Google CEO Sundar Pichai has put out a detailed blog post explaining the search giant’s principles around Artificial Intelligence (AI). The post makes it clear that Google’s AI technology will not be used for building weapons or mass surveillance systems. Pichai’s post on the AI principles comes even as Google faces criticism from within its own ranks over participation in the controversial Project Maven with the US Defense Department. Some Google employees have also resigned in protest over the company’s involvement in Project Maven.
According to Pichai’s blog post, “AI is computer programming that learns and adapts,” and it has profound potential to improve people’s lives. But he also admits that AI cannot solve all problems and that it will raise “equally powerful questions about its use.”
The Google CEO adds that since the company is a leader in the field of AI, it has a “deep responsibility to get this right.” Google will have a total of seven principles to guide its AI work and research. These principles will not just be concepts; according to Pichai, they “are concrete standards, which will govern our research and product development and will impact our business decisions.”
The company’s AI principles state that its AI will be socially beneficial, will avoid creating or reinforcing unfair bias, will be built and tested for safety, and will be accountable to people. AI from Google will also incorporate privacy principles into its design and uphold “high standards of scientific excellence.”
Finally, the seventh principle notes that AI will be made available only for uses that accord with the other six principles. Google will also evaluate when to make these new technologies available on a non-commercial basis.
The blog post also notes that Google will not build AI technologies that cause overall harm, nor weapons, including technologies that “cause or directly facilitate injury to people.” Nor will it allow its AI technology to be used for surveillance that violates “internationally accepted norms.” Google will not allow its AI to be used against the principles of international law and human rights.
So does this mean Google’s AI teams will not work with the military at all? Not really. According to the post, Google will continue to work with governments and the military in many other areas, including cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. Pichai adds, “These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.”
The post also notes that there is space for more voices in this conversation on AI principles, and Google will “work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches.” Google notes that its AI will “respect cultural, social, and legal norms” across the countries where it operates.