THE FUTURE IS HERE

Google creates external advisory board to monitor it for unethical AI use

Google today announced a new external advisory board to help monitor the company’s use of artificial intelligence for violations of the ethical principles it laid out last summer. The group was announced by Kent Walker, Google’s senior vice president of global affairs, and it includes experts on a wide range of subjects, including mathematics, computer science, engineering, philosophy, public policy, psychology, and even foreign policy.

The group will be called the Advanced Technology External Advisory Council, and it appears Google wants it to be seen as a kind of independent watchdog keeping an eye on how it deploys AI in the real world, with a focus on facial recognition and the mitigation of built-in bias in machine learning training methods. “This group will consider some of Google’s most complex challenges that arise under our AI Principles … providing diverse perspectives to inform our work,” Walker writes.

As for the members, the names may not be easily recognizable to those outside academia. However, the board’s credentials appear to be of the highest caliber, with résumés that include multiple presidential administration positions and posts at top universities, including the University of Oxford, the Hong Kong University of Science and Technology, and UC Berkeley. That said, the selection, which includes Heritage Foundation President Kay Coles James, appears aimed, at least in part, at appealing to the Republican Party and potentially helping influence AI-related legislation down the line.

Some critics of the board have noted that James, through her involvement with the conservative think tank, has espoused anti-LGBTQ rhetoric on her public Twitter profile.

Google was not immediately available for comment regarding James’ anti-LGBTQ stances and its selection process for the advisory board.

Last year, Google found itself embroiled in controversy over its participation in a US Department of Defense drone program called Project Maven. After immense internal backlash and external criticism for putting employees to work on AI projects that could involve the taking of human life, Google decided to end its involvement in Maven when its contract expired. It also put together a new set of guidelines, what CEO Sundar Pichai dubbed Google’s AI Principles, that would prohibit the company from working on any product or technology that might violate “internationally accepted norms” or “widely accepted principles of international law and human rights.”

“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai wrote at the time. “How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.” Google effectively wants its AI research to be “socially beneficial,” and that often means not taking government contracts or working in territories or markets with notable human rights violations.

Regardless, Google found itself in yet another similar controversy over its plans to launch a search product in China, one that may involve deploying some form of artificial intelligence in a country currently trying to use that very same technology to surveil and track its citizens. Google’s pledge differs from the stances of Amazon and Microsoft, both of which have said they will continue to work with the US government. Microsoft has secured a $480 million contract to provide HoloLens headsets to the Pentagon, while Amazon continues to sell its Rekognition facial recognition software to law enforcement agencies.

Update 3/26, 6:37PM ET: Added that critics of Google’s advisory board are calling on the company to answer for its selection of Heritage Foundation President Kay Coles James.