Google is attempting to apply its “don’t be evil” ethos to artificial intelligence.
Today, CEO Sundar Pichai published a lengthy set of “AI Principles” in which he promises the company won’t use AI to “cause overall harm” or create weapons.
Meant to address growing employee concerns about how the company approaches AI, the document includes “seven principles to guide our work going forward,” as well as four “AI applications we will not pursue.”
The latter group includes:
1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
3. Technologies that gather or use information for surveillance violating internationally accepted norms.
4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
The move comes less than a week after the company announced that it planned to sever ties with the Pentagon, after a contract with the Department of Defense sparked internal protests at the company over something called Project Maven. Employees were concerned that the company, once known for its slogan “don’t be evil,” was using its AI capabilities to help improve U.S. military drones.
“We believe that Google should not be in the business of war,” employees wrote. “Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”
“How AI is developed and used will have a significant impact on society for many years to come,” the announcement reads. “We feel a deep responsibility to get this right.” The full set of principles is available at https://blog.google/topics/ai/ai-principles/.
Though Pichai’s principles don’t address Project Maven directly, they do commit Google to not using AI to create weapons or “other technologies whose principal purpose” is to injure people.
Still, the CEO was careful to note that Google plans to work with the military “in many other areas.”
“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” Pichai wrote. “These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.”
Some critics were also quick to point out that the language comes with more than a few loopholes, giving Google significant leeway in implementing its new standards. One pointed to the caveat in the first principle, “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints,” and asked whether “this can justify anything?”
But while it’s not clear yet whether these principles go far enough to address employees’ concerns, the new rules could have an impact that reaches far beyond the walls of Google. Though other tech companies haven’t faced the same level of criticism over military contracts, Google’s move could pressure other companies to make similar commitments.
Beyond its work with the government, Pichai’s principles also address other controversial areas of AI, such as promises to “avoid creating or reinforcing unfair bias” and to “incorporate privacy design principles.”
Issues like privacy and bias have become increasingly important as tech companies grapple with how to responsibly implement increasingly powerful AI tools. And while many experts have called for some type of AI ethics regulation, most companies have been figuring it out as they go along.
But with Google publicly committing to standards, even basic ones, it could set an example for others to do the same.