Tags: Google » Face Recognition » Big Brother » Machine Learning
Google's culture of open, free-flowing discussion could be ending in the wake of an uproar over Google's partnership with the US Dept. of Defense, called Project Maven, to develop AI software that analyzes drone footage. Since Google's participation in Project Maven was publicly revealed in March, a raging debate has swirled within Google over whether the company famous for its "Don't Be Evil" motto should be involved in making weapons. Googlers even sent an open letter to Google CEO Sundar Pichai starting with the declaration "We believe that Google should not be in the business of war." That letter flatly called for Google's participation in Project Maven to be canceled.
On June 27, 2018, it was learned that Google had instituted new rules for internal discussion and workplace conduct within the company.
Project Maven was described a year ago as an "effort to help a workforce increasingly overwhelmed by incoming data, including millions of hours of video". That workforce appears to be the military intelligence analysts studying video from drone flights over war zones. According to a Gizmodo report in March, the amount of footage is so vast the analysts cannot keep up.
The idea is to use machine learning and computer vision techniques to analyze video and automatically identify approximately 38 classes of objects. Primarily this would be used in Syria in the fight against the Islamic State of Iraq and Syria (a.k.a. ISIS).
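To make the technical side concrete, here is a minimal sketch of that kind of pipeline: run an off-the-shelf TensorFlow object detector over video frames and flag detections from a fixed whitelist of classes for human review. The TensorFlow Hub model, the file name, and the three-class whitelist are all stand-ins of my own; the actual Maven model and its ~38 classes are not public.

```python
import cv2                      # opencv-python, for decoding video frames
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Publicly available SSD detector from TensorFlow Hub -- a stand-in for
# whatever model Maven actually uses.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

# Hypothetical whitelist: a few COCO classes (ID -> name).
CLASSES_OF_INTEREST = {3: "car", 6: "bus", 8: "truck"}

cap = cv2.VideoCapture("drone_footage.mp4")    # placeholder file name
frame_no = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame_no += 1
    # OpenCV yields BGR; the detector expects RGB, uint8, shape [1,H,W,3].
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
    result = detector(batch)
    classes = result["detection_classes"][0].numpy().astype(int)
    scores = result["detection_scores"][0].numpy()
    for cls, score in zip(classes, scores):
        if cls in CLASSES_OF_INTEREST and score > 0.5:
            # Flag for human review rather than acting automatically.
            print(f"frame {frame_no}: {CLASSES_OF_INTEREST[cls]} "
                  f"(score={score:.2f})")
cap.release()
```

The notable thing about such a sketch is how little of it is exotic: the detector, the video decoding, and the thresholding are all commodity components.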
I'll note in passing that earlier this week Amazon employees launched an internal protest over the federal government's use of Amazon's cloud-based video analysis platform, Rekognition. The issue in that case appears to be Immigration and Customs Enforcement (ICE) using real-time video analysis at the US-Mexico border in the context of the current immigration crisis there. Given that Project Maven has several companies competing for the work, Amazon is likely to be involved with Maven as well.
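Rekognition's video analysis is likewise an ordinary public API. Here is a minimal sketch with boto3, assuming the clip has already been uploaded to S3 (the bucket and file names are hypothetical):

```python
import time
import boto3

rek = boto3.client("rekognition")

# Kick off an asynchronous label-detection job on a video stored in S3.
job = rek.start_label_detection(
    Video={"S3Object": {"Bucket": "my-bucket", "Name": "clip.mp4"}}
)

# Poll until the job finishes. (In production you would register an SNS
# topic for completion notices; polling keeps the sketch self-contained.)
while True:
    resp = rek.get_label_detection(JobId=job["JobId"])
    if resp["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(5)

# Each entry pairs a timestamp (ms into the video) with a detected label.
for item in resp.get("Labels", []):
    label = item["Label"]
    print(item["Timestamp"], label["Name"], round(label["Confidence"], 1))
```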
A Google spokesperson told Gizmodo: “We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data. The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”
In April, the NY Times reported on an internal letter (reproduced below) sent by Google employees to CEO Sundar Pichai, along with other aspects of the controversy.
The video analysis technology itself is non-offensive when used simply to analyze drone footage. But it sits atop a slippery slope, at the bottom of which are artificially intelligent robots deciding on their own which targets to attack.
Google told the NY Times that the TensorFlow software is available to any Google Cloud customer. Indeed, go to cloud.google.com, click on Products, and you'll find several video-analysis services running on Google's cloud. The same is true for Amazon, whose AWS cloud platform includes video-analysis services.
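For instance, the Cloud Video Intelligence API can be called by any customer using the google-cloud-videointelligence Python client. A minimal sketch (the bucket URI is a placeholder):

```python
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# Start a long-running annotation job: detect labels (objects, scenes)
# across the whole video.
operation = client.annotate_video(
    request={
        "input_uri": "gs://my-bucket/sample-video.mp4",  # placeholder
        "features": [videointelligence.Feature.LABEL_DETECTION],
    }
)
result = operation.result(timeout=300)

# Print every label the service detected over the video's segments.
for label in result.annotation_results[0].segment_label_annotations:
    print(label.entity.description)
```

Anyone with a Google Cloud account can run this, which is Google's point: the underlying capability is generally available.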
Part of the issue here is that Google has not done as much partnering with the US Government as have Amazon and Microsoft. Both of those companies are not shy about partnering with spy agencies or the military, while Google has not openly done so. Google is, however, looking to get into the business of providing high-tech solutions to the US military.
On May 30, 2018, 9to5Google reported that new policies were being developed inside Google. Despite those policies, several engineers had already left Google in protest. An NY Times article the same day described the Pentagon contract as an identity crisis for Google. The Don't Be Evil motto is drilled into Googlers from the beginning, and it is hard to defend working with the military as not being evil.
On June 1, 2018, the NY Times reported that Google announced it would not renew its Project Maven contract. That article spent most of its time on the competitive landscape. It's not clear precisely what it means for Google to cancel the contract, since the military could simply sign up for Google cloud services on its own, without direct engineering support from Google AI researchers. And for that matter, there are plenty of AI companies with no compunctions about working with the military.
Google had hoped its involvement with Project Maven would open the door to more Government work in the future. That door may have closed now, unless Google is ingenious about a circuitous route to collaborating with the government.
On June 27, 2018, 9to5Google reported on the new policies. They include limits on offensive language and ad hominem attacks, making employees subject to discipline for discriminating against or attacking other Googlers, and they target "trolling" (posting something meant to inflame discussion) as well as discussions that are "disruptive to a productive work environment."
These kinds of rules can be broad and open to interpretation, especially when they call for adherence to "Google Values." Maybe when Google was a 1,000- or 2,000-person company it could have a fairly well-defined set of Values, but can it now that Google employs 80,000 people?
For example:
Discussions that make other Googlers feel like they don’t belong have no place here. Avoid blanket statements about groups or categories of people. Trolling, name calling, and ad hominem attacks will not be tolerated.
This is nicely said, but is it worded precisely enough to be used as justification for firing someone?
Googlers' letter to Sundar Pichai
Retrieved from: https://static01.nyt.com/files/2018/technology/googleletter.pdf
Dear Sundar,
We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.
Google is implementing Project Maven, a customized AI surveillance engine that uses “Wide Area Motion Imagery” data captured by US Government drones to detect vehicles and other objects, track their motions, and provide results to the Department of Defense.
Recently, Googlers voiced concerns about Maven internally. Diane Greene responded, assuring them that the technology will not “operate or fly drones” and “will not be used to launch weapons.” While this eliminates a narrow set of direct applications, the technology is being built for the military, and once it’s delivered it could easily be used to assist in these tasks.
This plan will irreparably damage Google’s brand and its ability to compete for talent. Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public’s trust. By entering into this contract, Google will join the ranks of companies like Palantir, Raytheon, and General Dynamics. The argument that other firms, like Microsoft and Amazon, are also participating doesn’t make this any less risky for Google. Google’s unique history, its motto Don’t Be Evil, and its direct reach into the lives of billions of users set it apart.
We cannot outsource the moral responsibility of our technologies to third parties. Google’s stated values make this clear: Every one of our users is trusting us. Never jeopardize that. Ever. This contract puts Google’s reputation at risk and stands in direct opposition to our core values. Building this technology to assist the US Government in military surveillance – and potentially lethal outcomes – is not acceptable.
Recognizing Google’s moral and ethical responsibility, and the threat to Google’s reputation, we request that you:
- Cancel this project immediately
- Draft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology