The Defense Innovation Board (DIB) on Thursday approved a set of new ethical guidelines for military applications of artificial intelligence, including an ‘off switch’ for future operational systems, that the group will now recommend to the Pentagon.
The DIB’s report follows a 15-month effort to examine ethical considerations for AI use cases that included several public forums, red-team exercises to understand how the guidelines could affect future operations and standing up a DoD working group to study implications of the new principles.
“We were tasked by the secretary to put together a process to deliver a set of information and potentially recommended principles on what AI ethics should be for the department,” Michael McQuade, Carnegie Mellon University’s vice president for research and board member, told attendees at a DIB public meeting on Thursday. “We need to provide clarity to the people that will use these systems, and we need to provide clarity to the public so that they understand how the department will use AI in the world as we go forward.”
The DIB was created under the Obama administration and includes executives from Google [GOOG], Facebook [FB], and Microsoft [MSFT], as well as representatives from universities, to advise the Pentagon on major technology efforts and innovation opportunities.
The Pentagon tasked the DIB last year to put together a set of principles and recommendations that could guide discussions on ethics for AI across the services and the new Joint Artificial Intelligence Center (JAIC).
“There are certain things about AI that we do believe represent fundamental differences to ‘just another piece of technology.’ They relate to the potential of augmenting or replacing human judgment,” McQuade said. “The DoD’s goal for its AI activities should be that they be responsible, equitable, traceable, reliable and governable.”
The group approved the report by voice vote, and it will now be considered by the secretary of defense, but not before the board debated one of the principles, which focused on having an “off switch” for AI systems that start performing unintended actions.
“Having dealt with a lot of these artificial intelligence systems, I think this is the most problematic aspect about them because they are capable of exhibiting the forms of behavior that are very difficult for the designer to predict,” Danny Hillis, a computer theorist and co-founder of Applied Inventions, said during Thursday’s meeting. “I feel like there is a principle that is not really clearly stated here that needs to be very explicitly stated, which is that all DoD AI systems should have a reliable means for deactivation by a human if it’s required.”
Eric Schmidt, a former CEO of Google and the DIB chairman, proposed the board members add language that states an AI system acting out of order could be deactivated by either a human or an automatic system, with the group ultimately agreeing to move the report forward with the amended language.
Following the board meeting, McQuade told reporters that, as the Pentagon looks to roll out AI systems on the battlefield, the focus remains on ensuring the tools don’t develop biases and always have a control mechanism to avoid unintended consequences.
“We just want to recognize what could be the consequences of an AI system developing a capability for which it was not intended,” McQuade said. “You need to be able to have a method to detect when the system is doing something it was not intended to do and to deactivate the system. Whether that’s the machine doing it or people doing it, I think that’s probably not as relevant as the general principle: when it’s exhibiting behavior it wasn’t intended to do, you have to deactivate it.”