Google cooperates with the DOD in creating AI for drones

Google has partnered with the United States Department of Defense to help the agency develop artificial intelligence for analyzing drone footage, a move that set off a firestorm among employees of the technology giant when they learned of Google’s involvement.

Google’s pilot project with the Defense Department’s Project Maven, an effort to identify objects in drone footage, has not been previously reported, but it was discussed widely within the company last week when information about the project was shared on an internal mailing list, according to sources who asked not to be named because they were not authorized to speak publicly about the project.

Some Google employees were outraged that the company would offer resources to the military for surveillance technology involved in drone operations, sources said, while others argued that the project raised important ethical questions about the development and use of machine learning.

Google’s Eric Schmidt summed up the tech industry’s concerns about collaborating with the Pentagon at a talk last fall. “There’s a general concern in the tech community of somehow the military-industrial complex using their stuff to kill people incorrectly,” he said. While Google says its involvement in Project Maven is not related to combat uses, the issue has still sparked concern among employees, sources said.

Project Maven, a fast-moving Pentagon project also known as the Algorithmic Warfare Cross-Functional Team (AWCFT), was established in April 2017. Maven’s stated mission is to “accelerate DoD’s integration of big data and machine learning.” In total, the Defense Department spent $7.4 billion on artificial intelligence-related areas in 2017, the Wall Street Journal reported.

The project’s first assignment was to help the Pentagon efficiently process the deluge of video footage collected daily by its aerial drones—an amount of footage so vast that human analysts can’t keep up, according to Greg Allen, an adjunct fellow at the Center for a New American Security, who co-authored a lengthy July 2017 report on the military’s use of artificial intelligence. Although the Defense Department has poured resources into the development of advanced sensor technology to gather information during drone flights, it has lagged in creating analysis tools to comb through the data.

“Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI,” Allen wrote.

Maven was tasked with using machine learning to identify vehicles and other objects in drone footage, taking that burden off analysts. Maven’s initial goal was to provide the military with advanced computer vision, enabling the automated detection and identification of objects in as many as 38 categories captured by a drone’s full-motion camera, according to the Pentagon. Maven provides the department with the ability to track individuals as they come and go from different locations.
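The article doesn't say how Maven's models are actually built, but the general shape of such a pipeline is well known from open-source tooling. Below is a minimal sketch of per-frame object detection on video using a publicly available TensorFlow Hub detector; the model handle, output keys, and threshold follow the public TF2 object-detection convention and are assumptions for illustration, not details of Maven itself.

```python
# Hypothetical sketch: per-frame object detection on drone-style video with an
# off-the-shelf TensorFlow Hub detector. Nothing here is taken from Project Maven.
import cv2                      # OpenCV, used only to read video frames
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Public SSD MobileNet V2 detector (COCO classes), assumed here as a stand-in.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

def detect_objects(video_path, score_threshold=0.5):
    """Yield (frame_index, boxes, class_ids, scores) for confident detections."""
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # The detector expects a batch of uint8 RGB images.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        batch = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
        result = detector(batch)
        scores = result["detection_scores"][0].numpy()
        keep = scores >= score_threshold
        yield (frame_idx,
               result["detection_boxes"][0].numpy()[keep],    # normalized [y1, x1, y2, x2]
               result["detection_classes"][0].numpy()[keep],  # integer class IDs
               scores[keep])
        frame_idx += 1
    cap.release()
```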

Artificial intelligence is already deployed in law enforcement and military applications, but researchers warn that these systems may be significantly biased in ways that aren’t easily detectable. For example, ProPublica reported in 2016 that an algorithm used to predict the likelihood of recidivism among inmates routinely exhibited racial bias. (OP's note: lol)
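For a sense of what ProPublica's finding looks like in practice, an audit of this kind usually boils down to comparing error rates across groups. The sketch below shows one such check; the column names are placeholders, not ProPublica's or the vendor's actual schema.

```python
# Illustrative sketch of a group-wise error-rate audit for a risk score.
# Column names ("group", "predicted_high_risk", "reoffended") are placeholders.
import pandas as pd

def false_positive_rate_by_group(df: pd.DataFrame) -> pd.Series:
    """FPR per group: flagged high-risk but did not actually reoffend."""
    did_not_reoffend = df[df["reoffended"] == 0]
    return did_not_reoffend.groupby("group")["predicted_high_risk"].mean()

# A large gap between groups here is the kind of disparity the 2016
# investigation reported, even though race was never an explicit input.
```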

Although Google’s involvement stirred up concern among employees, it’s possible that Google’s own product offerings limit its access to sensitive government data. While its cloud competitors, Amazon Web Services and Microsoft Azure, offer government-oriented cloud products designed to hold information classified as secret, Google does not currently have a similar product offering.

A Google spokesperson told Gizmodo in a statement that it is providing the Defense Department with TensorFlow APIs, which are used in machine learning applications, to help military analysts detect objects in images. Acknowledging the controversial nature of using machine learning for military purposes, the spokesperson said the company is currently working “to develop policies and safeguards” around its use.

“We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data,” the spokesperson said. “The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”
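Taking the spokesperson's description at face value — open-source object recognition whose output only flags images for a person to look at — the surrounding workflow might look like the sketch below. The function and file names are hypothetical; the point is that the model produces a review queue rather than triggering any action.

```python
# Minimal sketch of the "flags images for human review" pattern described above.
# All names are hypothetical; this is not a Google or DOD API.
import csv

def flag_frames_for_review(detections, out_csv="review_queue.csv",
                           score_threshold=0.7):
    """Write confident detections to a CSV worklist for a human analyst.

    `detections` is any iterable of (frame_index, class_id, score) tuples,
    e.g. flattened output of whatever object detector is in use.
    """
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["frame", "class_id", "score"])
        for frame_idx, class_id, score in detections:
            if score >= score_threshold:
                writer.writerow([frame_idx, int(class_id), float(score)])
    # Nothing is acted on automatically: the CSV is just a queue for review.
```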

The Defense Department set an aggressive timeline for Maven—the project was expected to be up and running just six months after it was founded, and reportedly has been deployed in the fight against the Islamic State since December.

To meet the aggressive timetable, the Defense Department partnered with AI experts in the tech industry and academia, working through the Defense Innovation Unit Experimental, the department’s tech incubation program, and the Defense Innovation Board, an advisory group created by former Secretary of Defense Ash Carter to bridge the technological gap between the Pentagon and Silicon Valley.

Schmidt, who stepped down as executive chairman of Google parent company Alphabet last month, chairs the Defense Innovation Board. During a July meeting, Schmidt and other members of the Defense Innovation Board discussed the Department of Defense’s need to create a clearinghouse for training data that could be used to enhance the military’s AI capability. Board members played an “advisory role” on Project Maven, according to meeting minutes, while “some members of the Board’s teams are part of the executive steering group that is able to provide rapid input” on Project Maven.

Maven is overseen by the undersecretary of defense for intelligence, and Lt. Gen. John Shanahan was selected as the project’s director. Maven was designed to be the spark, according to Shanahan, that would kindle “the flame front of artificial intelligence” across the entire Defense Department.

By summer 2017, the team set out to locate commercial partners whose expertise was needed to make its AI dreams a reality. At the Defense One Tech Summit in Washington, Maven chief Marine Corps Col. Drew Cukor said a symbiotic relationship between humans and computers was crucial to help weapon systems detect objects.

Speaking to a crowd of military and industry technology experts, many from Silicon Valley, Cukor professed the US to be in the midst of an AI arms race. “Many of you will have noted that Eric Schmidt is calling Google an AI company now, not a data company,” he said, although Cukor did not specifically cite Google as a Maven partner.

“There is no ‘black box’ that delivers the AI system the government needs, at least not now,” he continued. “Key elements have to be put together … and the only way to do that is with commercial partners alongside us.”

A spokesperson for the Defense Department declined to say whether Google was its only private industry partner on Project Maven or to clarify Google’s role in the project.

“Similar to other DOD programs, Project Maven does not comment on the specifics of contract details, including the names and identities of program contractors and subcontractors,” the spokesperson said.
 
“The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns.”
For example, ProPublica reported in 2016 that an algorithm used to predict the likelihood of recidivism among inmates routinely exhibited racial bias.

Right, because the "technology" is totally NOT going to be racist. I feel like someone in the military is going to snitch on this technology for being racist. I just feel like it's going to happen.
 
"Many of you will have noted that Eric Schmidt is calling Google an AI company now, not a data company,”

Yes, because they have all the data they need and are ready to implement that as an "AI" company. It was obvious from the outset this would be their next logical step. Though, keep giving your info to Google for free, everyone.
You can't be "doing evil" if you're busy being villainous!
 
Google’s Eric Schmidt summed up the tech industry’s concerns about collaborating with the Pentagon at a talk last fall. “There’s a general concern in the tech community of somehow the military-industrial complex using their stuff to kill people incorrectly,” he said.

Written by a true computer autist. I feel like you could read that two ways: "There's a general concern in the tech community of the MIC using the tech community's stuff to kill people the wrong way," and the (probably more correct) intention of "There's a general concern in the tech community of the MIC using the tech community's stuff to kill people. This is an incorrect assumption." Either way, it's a remarkably hard-to-read quote.

researchers warn that these systems may be significantly biased in ways that aren’t easily detectable. For example, ProPublica reported in 2016 that an algorithm used to predict the likelihood of recidivism among inmates routinely exhibited racial bias.

More blacks and brownish hispanics commit crime in the U.S., and A.I. properly interprets the data and predicts their accurate behavior of being repeat offenders. This isn't wrong. This is A.I. working as intended and you don't like the truth. There's a deeper discussion to be had of the penalties for drug and firearm trafficking & prison inmate culture encouraging these trends, but don't call the computer racist for pinpointing spade behavior in spades. Disgusted, disappointed sigh.
 
I don't understand this desire to even the playing field against our enemies. If we've already determined they need to die, we should be efficient about it. The only reason you'd willingly pull punches is if you're not 100% interested in winning.

And I guess that's really what's going on under the surface. People don't want us to win and don't believe in what we're doing.

But c'mon, they're already bitching about America and our military actions (for good reasons or not) 24/7. Why do they pause their bitching right now and say "oh, but this time we've got a good reason: ethics", and expect us to take it at face value?
Written by a true computer autist. I feel like you could read that two ways: "There's a general concern in the tech community of the MIC using the tech community's stuff to kill people the wrong way," and the (probably more correct) intention of "There's a general concern in the tech community of the MIC using the tech community's stuff to kill people. This is an incorrect assumption." Either way, it's a remarkably hard-to-read quote.
I don't see how you could've gotten the latter interpretation from that quote. "Incorrectly" is an adverb, and the only thing it could apply to is "kill".

The quote is very straightforward.

The only reason to question that interpretation is that it's incredibly tone-deaf for his audience. But that's not really a problem with the quote, and more just looking at the overall situation where SJWs will flip shit whenever we improve our killing technology.
More blacks and brownish hispanics commit crime in the U.S., and A.I. properly interprets the data and predicts their accurate behavior of being repeat offenders. This isn't wrong. This is A.I. working as intended and you don't like the truth. There's a deeper discussion to be had of the penalties for drug and firearm trafficking & prison inmate culture encouraging these trends, but don't call the computer racist for pinpointing spade behavior in spades. Disgusted, disappointed sigh.
Read the article. It's actually some pretty retarded stuff. For example:
Northpointe’s core product is a set of scores derived from 137 questions that are either answered by defendants or pulled from criminal records. Race is not one of the questions. The survey asks defendants such things as: “Was one of your parents ever sent to jail or prison?” “How many of your friends/acquaintances are taking drugs illegally?” and “How often did you get in fights while at school?” The questionnaire also asks people to agree or disagree with statements such as “A hungry person has a right to steal” and “If people make me angry or lose my temper, I can be dangerous.”
The racial angle is just an attempt to drum up clicks, but it's legitimately a stupid tool for real reasons.

You shouldn't be punished because your dad stole a bike in his twenties. Your punishment should be entirely based on the merit of the facts of your case.
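To make the disagreement here concrete: a score can omit race as a question and still split along racial lines if any of its inputs is correlated with race. The toy simulation below is invented purely to show that mechanism; every number in it is made up.

```python
# Toy illustration: a score that never looks at race can still differ by group
# if one of its questions is correlated with race. All numbers are invented.
import random

random.seed(0)

def questionnaire_score(parent_incarcerated, prior_fights):
    # Hypothetical weighted sum, standing in for a 137-question survey.
    return 3 * parent_incarcerated + 1 * prior_fights

def mean_score(rate_parent_incarcerated, n=10_000):
    scores = []
    for _ in range(n):
        parent = random.random() < rate_parent_incarcerated
        fights = random.randint(0, 3)   # same distribution for everyone
        scores.append(questionnaire_score(parent, fights))
    return sum(scores) / n

# The groups differ only in how often a parent was incarcerated, yet their
# average "risk" scores diverge even though group membership is never an input.
print("group A mean score:", mean_score(0.30))
print("group B mean score:", mean_score(0.10))
```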
 
When I was a kid I would always deliberately fuck up surveys like this, by picking some insane historical personality like Hitler or Mao and then just answering the questions like I thought they would. Also, I'd encourage everyone else I knew to fuck them up, too.

You really can't trust shit like this.
 
Anyone remember this?
[image: Google-evil.jpg]
No? Alright then, carry on.
 