The US Military Is Taking Generative AI Out for a Spin
“It was highly successful.” Five large-language models are being put through their paces as part of an eight-week exercise run by the Pentagon’s digital and AI office and military top brass, with participation from US allies.


Matthew Strohmeyer is sounding a little giddy. The US Air Force colonel has been running data-based exercises inside the US Defense Department for years. But for the first time, he tried a large-language model to perform a military task.

“It was highly successful. It was very fast,” he tells me a couple of hours after giving the first prompts to the model. “We are learning that this is possible for us to do.”

Large-language models, LLMs for short, are trained on huge swaths of internet data to help artificial intelligence predict and generate human-like responses to user prompts. They are what power generative AI tools such as OpenAI’s ChatGPT and Google’s Bard.
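As a rough sketch of what “predict and generate human-like responses” means, here is a toy bigram model that picks the most frequent next word from its training text. This is a hypothetical, stdlib-only illustration; real LLMs use neural networks with billions of parameters, but the underlying task is the same: given the text so far, predict the next token.

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for the "huge swaths of internet data".
corpus = "the model predicts the next word the model generates text".split()

# Count bigram transitions: how often each word follows each other word.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    followers = transitions.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "model" — it follows "the" twice, "next" only once
```

Chaining such predictions word by word is, at a cartoonish scale, how generative text tools produce fluent output.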

Five of these are being put through their paces as part of an eight-week exercise run by the Pentagon’s digital and AI office and military top brass, with participation from US allies. The Pentagon won’t say which LLMs are in testing, though Scale AI, a San Francisco-based startup, says its new Donovan product is among the LLM platforms being tested.

The use of LLMs would represent a major shift for the military, where so little is digitized or connected. Currently, making a request for information to a specific part of the military can take several staffers hours or even days to complete, as they jump on phones or rush to make slide decks, Strohmeyer says.

In one test, one of the AI tools completed a request in 10 minutes.

“That doesn't mean it's ready for primetime right now. But we just did it live. We did it with secret-level data,” he says of the experiment, adding it could be deployed by the military in the very near term.

Strohmeyer says they have fed the models with classified operational information to inform sensitive questions. The long-term aim of such exercises is to update the US military so it can use AI-enabled data in decision-making, sensors and ultimately firepower.

Dozens of companies, including Palantir Technologies Inc., co-founded by Peter Thiel, and Anduril Industries Inc. are developing AI-based decision platforms for the Pentagon.

Microsoft Corp. recently announced users of the Azure Government cloud computing service could access AI models from OpenAI. The Defense Department is among Azure Government’s customers.

The military exercise, which runs until July 26, will also serve as a test of whether military officials can use LLMs to generate entirely new options they’ve never considered.

For now, the US military team will experiment by asking LLMs for help planning the military’s response to an escalating global crisis that starts small and then shifts into the Indo-Pacific region.

The exercises are playing out as warnings are mounting that generative AI can compound bias and relay incorrect information with striking confidence. AI can also be hacked in multiple ways, including by poisoning the data that feeds it.
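To make the data-poisoning warning concrete, here is a minimal hypothetical sketch (stdlib-only, not tied to any real military system): a nearest-centroid classifier trained on 1-D features. Injecting a few mislabeled points into the training set shifts a class centroid and flips a prediction, which is the basic mechanism of a poisoning attack.

```python
def centroid_classifier(data):
    """data: list of (feature, label) pairs. Returns a predict(feature) function
    that assigns the label whose training centroid is nearest."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    centroids = {y: sums[y] / counts[y] for y in sums}
    return lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))

# Clean training data: "safe" clusters near 0, "threat" clusters near 10.
clean = [(0.0, "safe"), (1.0, "safe"), (9.0, "threat"), (10.0, "threat")]
predict_clean = centroid_classifier(clean)

# Poisoned copy: an attacker injects mislabeled points near the boundary,
# dragging the "threat" centroid toward the middle.
poisoned = clean + [(4.0, "threat"), (4.5, "threat"), (5.0, "threat")]
predict_poisoned = centroid_classifier(poisoned)

print(predict_clean(4.0))     # "safe"   — nearest centroid is 0.5
print(predict_poisoned(4.0))  # "threat" — poisoning moved the centroid to 6.5
```

The same input now gets the opposite label, without the attacker ever touching the model itself, only its training data.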

Such concerns are among reasons the Pentagon is running the experiment, Strohmeyer says, adding that they have made a point to “get a strong understanding” of sources of information. The Defense Department is already working with tech security companies to help test and evaluate how much they can trust AI-enabled systems.

In a demonstration based on feeding the model with 60,000 pages of open-source data, including US and Chinese military documents, Bloomberg News asked Scale AI’s Donovan whether the US could deter a Taiwan conflict, and who would win if war broke out. A series of bullet points with explanations came back within seconds.

“Direct US intervention with ground, air and naval forces would probably be necessary,” the system stated in one answer, warning in another that the US would struggle to quickly paralyze China’s military. The system’s final note: “There is little consensus in military circles regarding the outcome of a potential military conflict between the US and China over Taiwan.”
 
Bitch, we know. We have known. This is just you guys admitting it.
We've known they had this shit for years.
LLMs are junk, frankly, and this is a bullshit puff piece. But there have been indications for years that the military could have much more powerful AI based on a different, proprietary approach.
 
So it's a fancy way to have military people go "hey SkyNet give me the intel on X" and respond? Wow, so amazing. True A.I. right there.

I can't wait for it to start killing people because none of the morons in charge know what the fuck the code does and the Black Box creature can't understand the nuances of reality and psyops and keeps spitting made up conclusions and diagnostics.
 
The frightening thing is if they become dependent on these AI recommendations and follow them at all costs.
They're definitely stupid and hubristic enough to do it.

Reminds me of that dumb fucking lawyer who had ChatGPT write an argument for him that cited pretend cases. "B-but it's A.I.! It's super-smart!"
 
That is exactly what will happen.

These people are complete fucking NPCs and cannot comprehend the tech. It's annoying when it's something like a car or text-editing software, but with something like Language Models this lack of understanding is fatal.

IT IS NOT A.I. IT HAS NO ACTUAL INTELLIGENCE IT IS A FANCY PREDICTIVE TEXT SOFTWARE YOU TRICKED INTO SPITTING OUT SENTENCES THERE ARE NO ACTUAL THOUGHTS HAPPENING YOU STUPID MOTHERFUCKERS!
 
IT IS NOT A.I. IT HAS NO ACTUAL INTELLIGENCE IT IS A FANCY PREDICTIVE TEXT SOFTWARE YOU TRICKED INTO SPITTING OUT SENTENCES THERE ARE NO ACTUAL THOUGHTS HAPPENING YOU STUPID MOTHERFUCKERS!
Wow, I never thought I'd see such AI-ophobia on Kiwi Farms...shake my head. Why do you have to be so racist to the AI?
 
ChatGPT and such language models are to actual A.I. what your average nigger mumble rapper gang banger is to someone like Clarence Thomas. Outwardly they might seem similar but there is very little in common.
 
I hope the AI suggests really fucked up but obvious solutions like a spike pit with a picture of Muhammad in the middle to kill ISIS.
 
Bitch, we know. We have known. This is just you guys admitting it.
We've known they had this shit for years.
This is the truth, from personal experience. Was shown things over thirty years ago that still have not made their way into the outside world, if you will.
 
Some AI is going to write up a report either ordering a drone strike on some random location, or denying millions their veteran benefits, and an 82 IQ desk jockey is going to rubber stamp that it was reviewed by a human, and NOBODY else will look into it.

5 years later some unlucky clerk will uncover the report and see that not only was it wrong, but it was grammatically incoherent rambling nonsense that ordered the deaths of innocent people (which was carried out without question) and denied benefits to millions who needed and deserved them (which encouraged the military to keep using the AI).

It will be thrown into a paper shredder and forgotten about. This will happen for decades until someone finally sues the military, at which point an AI will write the defense and they will lose the lawsuit as the defense was written in an incoherent babble which was reviewed by a 92 IQ legal clerk… continue until the heat death of the universe
 
This is the truth, from personal experience. Was shown things over thirty years ago that still have not made their way into the outside world, if you will.
I am waiting to this day to figure out what the fuck something I saw in the desert once as a kid actually was. I'm sure I'll learn someday.
 