US Politics General - Discussion of President Biden and other politicians

Then why aren't we watching AI sports leagues?
Because it would be boring. We know the computer can do it perfectly. When a human does it perfectly it’s interesting and exciting.
Well, SaltyBet is a thing, the closest I can think of that comes to what's being described.

Slapping a bunch of fellas from all niche special-interest media (including the Johnny Bravo) into MUGEN and having them beat the everloving SHIT out of one another is a pretty goofy time.

You could even call it an Autistic Thunderdome, even.

Good afternoon USPG! I got you all a burger! I'll put it on the table there. Happy Weekend! Forecast calls for a vote for Speaker, it looks like. I'm not THE politics guy, but from what I understand, some people here do not want that Johnson fella.

My final word on the whole thing is that every time a human thinks of something that the programming didn’t account for, the human experiences demoralization, wondering how many more times it can pull a rabbit out of the hat.
God.

Too fucking real.

One little input from me: this is very true. It's the issue of ever-updating visions and ideas vs. a tool that can't take a human's mind and creativity into account.
 
The 90 percent rule kicks in for AI and air combat.

Most missions are pretty simple. Let's use one example: Wild Weasel missions. Have the robot fly over and bomb X coordinates; if it gets target-locked, do X and Y. The SAM battery it's fighting might already be on automatic mode too. Because there's no human aboard, it can do better evasion. Having no human is also good because it's high risk; if it does something stupid, it's over enemy territory already. I foresee this being the first allowed use case; there's a toy sketch of what I mean at the end of this post.

Air patrol. Fly around; if you see a thing you don't recognize, shoot it. A human slows all of that down.

Air truck work. Go to X place. Get stuff put in. Fly to Y place.

Aerial refueling. Ditto.

The areas where a human pilot is needed do exist, but it's the equivalent of the "but what if the 13-year-old gets raped!" arguments for abortion. Edge cases. Don't use shitlib argumentation.
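Roughly the kind of scripted ruleset I mean, as a toy sketch; every object, method, and number below is invented for illustration, not any real drone or autopilot API:

```python
# Toy sketch of the "bomb X coordinates, if target-locked do X and Y" rule.
def wild_weasel_mission(drone, target):
    """Fly to a target, reacting to SAM locks with a fixed rule."""
    while not drone.at(target):
        if drone.radar_warning.locked():    # SAM battery has a lock on us
            drone.deploy_countermeasures()  # "do X"
            drone.evade(max_g=12)           # "and Y": no pilot means no g-limit for a body
        drone.step_toward(target)
    drone.release_ordnance(target)
    drone.return_to_base()
```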
 
God.

Too fucking real.
God existing is too spoopy for some people, I guess.
Most missions are pretty simple. Let's use one example: Wild Weasel missions. Have the robot fly over and bomb X coordinates; if it gets target-locked, do X and Y.

The areas where a human pilot is needed do exist, but it's the equivalent of the "but what if the 13-year-old gets raped!" arguments for abortion. Edge cases. Don't use shitlib argumentation.
Flying a jet is more complicated than that. What if the AI can't determine friend or foe? What if the target is visually an enemy but sending friendly signals? What about the opposite? What if there's an instrument error on a high-priority mission that causes the jet to turn back when a human pilot can overcome that instrument error? If your only reply to this is "you're a shitlib" then I don't know what to tell you, slugger.
 
The areas where a human pilot is needed do exist, but it's the equivalent of the "but what if the 13-year-old gets raped!" arguments for abortion. Edge cases. Don't use shitlib argumentation.
The best argument I saw was processing time, but I don't even know how to quantify that. For most things, the computer is just going to be orders of magnitude faster. But yeah, maybe if it was trying to terrain-map to fly low, it could get bogged down, things of that nature. And then there are probably also ways to teach it to limit the amount of information it's trying to handle, like going low-resolution.
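For instance, "going low-resolution" could be as crude as averaging the terrain grid down before planning over it. A toy numpy sketch, with the random map and the 4x factor picked arbitrarily:

```python
import numpy as np

def downsample(heightmap: np.ndarray, factor: int = 4) -> np.ndarray:
    """Average non-overlapping factor-by-factor blocks into one cell."""
    h, w = heightmap.shape
    h, w = h - h % factor, w - w % factor    # trim to a multiple of factor
    blocks = heightmap[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

terrain = np.random.rand(1024, 1024)    # stand-in for sensor data
coarse = downsample(terrain)            # 256x256 grid: 16x fewer cells to plan over
```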
 
Flying a jet is more complicated than that. What if the AI can't determine friend or foe?
How do human pilots determine friend or foe when fighting beyond visual range? Is it really that much of a decision tree with lots of ambiguity? I am doubtful of that. Also, you are assuming a human eyeball can tell make and model better than an AI one for a fast-moving plane. I would want evidence of that.

How does an air-to-air missile determine friend from foe? The recent USS Gettysburg incident shows it's an existing problem. The goal isn't nonexistent perfection but same-or-better results.

These are solvable problems, and in fact they have been solved. Why does the air truck need complex friend-or-foe identification?

What if the target is visually an enemy but sending friendly signals? What about the opposite?
What happens today when this happens? Pretty sure what happens is they call it in and get guidance, and so would the AI. If neither can call it in, they fall back to the standing orders and which side to err on. You can also have it default to doing nothing.

I am pretty sure most human pilots would arrive at the same result. If they saw an F-15 with Russian IFF, they would go back to information from the prior briefing and whether visual ID beat IFF. The AI would have rules on which took precedence and would be fed situational data as well. The human pilot would call it in and get guidance. The AI would communicate and get guidance. If no guidance is possible, then both would default to standing orders for what to do (toy sketch below).

Also, this edge case happens once and everybody adjusts to it thereafter.
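A toy illustration of what "rules on which took precedence" could look like: a fixed priority order over identification sources, with standing orders as the fallback. All the names, the ordering, and the example values are invented:

```python
PRECEDENCE = ["briefing", "visual_id", "iff"]    # highest priority first

def classify_contact(reports: dict) -> str:
    """reports maps source -> 'friend' / 'foe' / None (inconclusive)."""
    for source in PRECEDENCE:
        verdict = reports.get(source)
        if verdict in ("friend", "foe"):
            return verdict
    return "withhold"    # standing order here: default to doing nothing

# The F-15-with-Russian-IFF case: visual ID outranks the squawk.
print(classify_contact({"iff": "foe", "visual_id": "friend"}))   # -> friend
```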
What if there's an instrument error on a high-priority mission that causes the jet to turn back when a human pilot can overcome that instrument error? If your only reply to this is "you're a shitlib" then I don't know what to tell you, slugger.
This is the 10 percent use case I talked about. But if you can field 3 AI fighters where you could only field 1 human-flown one before? And all 3 had instrument issues? Edge case (rough numbers at the end of this post).

There are all kinds of instrument problems that the AI can get through and the pilot can't, because for the AI it's not a problem to begin with. F-35s have had issues with cockpit pressure and life-support systems. The AI fighter wouldn't have to go back because a bird hit the cockpit and made a crack.

But yes, for these small cases you can bust out the human pilot. My point isn't that human pilots will go away, but that they will be a smaller group and the bulk of air missions will be AI.

My fundamental point is that if you're bringing up tiny edge cases to buttress your argument, it's a weak argument.
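To put rough numbers on the "all 3 had instrument issues" line, here is a two-line sketch; the abort probability is an invented number, purely for illustration:

```python
# If each jet independently aborts on an instrument failure with
# probability p, the whole 3-ship is lost only if all three abort.
p = 0.05        # invented illustrative abort rate, not a real figure
print(p)        # one human-flown jet aborting:  0.05
print(p ** 3)   # all three AI jets aborting:    0.000125 (~1 in 8,000)
```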
 
The best argument I saw was processing time, but I don't even know how to quantify that. For most things, the computer is just going to be orders of magnitude faster.

AI is only good at doing things that have been done before. There are lots of times that's useful, but pivotal wars are characterized by times when past performance had little to tell you about future results.

The main problem with AI is it's backward-facing. It tends to take a long time to adapt to new situations (only movie AIs learn instantly) and is utterly useless at creating those new situations. Moreover, you just don't get that many chances to fail in war. Imagine AI pilots trained on all the dogfights of WWI taking to the skies to defend Britain in WWII. The RAF would have been utterly destroyed by the Luftwaffe in a week, with zero examples of successful dogfights to learn from. Oh sure, you've got lots of data points now to train your next AI on. Too bad you're out of fighters, Operation Sea Lion is a go, and Britain is now being invaded by the Germans...with, once again, a new tactical situation with no previous examples to learn from.

This is the 10 percent use case I talked about.

The 10% use cases are the cases that decide wars.
 
How do human pilots determine friend or foe when fighting beyond visual range?
Most likely radar cross-section, those sorts of characteristics. You can see a lot more with low-frequency bands than you can lock onto, and you can probably tell what it is by comparing how it reflects a few different frequencies.
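A toy version of that comparison: match a measured multi-band return against a small signature library and take the nearest entry. The bands, the dBsm values, and the aircraft entries are all made up:

```python
import math

SIGNATURES = {                       # mean return (dBsm) in three bands
    "friendly_f15": (12.0, 9.5, 7.0),
    "hostile_su35": (11.0, 10.5, 9.0),
    "airliner":     (25.0, 24.0, 23.0),
}

def closest_type(measured: tuple) -> str:
    """Nearest library signature by Euclidean distance across the bands."""
    return min(SIGNATURES, key=lambda name: math.dist(SIGNATURES[name], measured))

print(closest_type((11.4, 10.2, 8.6)))    # -> hostile_su35
```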
This is the 10 percent use case I talked about. But if you can field 3 AI fighters where you could only field 1 human-flown one before? And all 3 had instrument issues? Edge case.
Humans are an instrument, too. Really. We probably have a much higher failure rate when compared side by side to, say, an altimeter, depending on how you define failure. Mechanical instruments aren't subject to hypoxia.

But yeah, you covered that.

Humans have bias. We tend to ignore our own mistakes if they're small enough to overlook. But we're really generalists, and we can't compete with an array of specialist systems.
 
AI is only good at doing things that have been done before.

The 10% use cases are the cases that decide wars.
Couldn't have said it better myself.

Also, looks like there will be a second Speaker vote. 216 for Johnson, 215 for Jeffries, and 3 votes for "other." Round two begins!

Also, is it just me, or does Kamala Harris look more and more like Michael Jackson as she ages?
 
AI is only good at doing things that have been done before. There are lots of times that's useful, but pivotal wars are characterized by times when past performance had little to tell you about future results.
So are humans, generally. You’re crediting us with implementing new doctrine as it coincides with advances in capabilities.

How long did we sit in trenches for?

Until we could drive over them. Tactics stagnate.
 
So are humans, generally.

Yes, generally...but it's the exceptions who win, like the Germans attacking through the Ardennes, which had never been done before. Humans are chaos agents, and it's the innovators who win, not the guys who stick to the manual and run things based on how they worked in the past. AI doesn't have exceptions. AI doesn't innovate. AI only replicates the past, over and over.

What's your AI going to do when Rommel's tanks burst through what all past data said was an impenetrable forest? The war is already over.
 