U.S. Air Force development of low-cost unmanned aerial vehicles (UAVs) and the Skyborg low-cost attritable demonstrator hold significant promise, yet a key question will be the level of artificial intelligence (AI)-enabled autonomy that the service will afford such drones.
“Low-cost UAVs can add complexity to the thinking, both our thinking and the adversary’s thinking, because we can proliferate them and make them all a little different,” Air Force Chief Scientist Richard Joseph told the Mitchell Institute for Aerospace Studies’ Aerospace Nation forum this week.
Through digital engineering, the Air Force may be able to produce drones with different capabilities within a given UAV make and may be able to tailor the drones’ signatures.
“With Skyborg, those questions are beyond the S&T [Science and Technology],” Joseph said. “I think we can do it, but I think we have some other questions to answer. How much autonomy do we want for a system that can deliver lethal force, and especially one that’s moving at machine speed? I’m worried about stability in peacetime with systems that are moving at machine speed and making their own judgments. We know AI is very useful, and it’s been widely used.”
Earlier this month, the Air Force awarded more than $76 million to Kratos [KTOS], Boeing [BA], and General Atomics to build Skyborg prototypes and fly them in teaming with manned aircraft (Defense Daily, Dec. 7).
The Air Force said that it expects to receive the first prototypes by next May for initial flight tests and to begin experimentation in July.
The Air Force launched Skyborg in May in an effort to field an AI-driven system to be a “quarterback in the sky” for manned aircraft. Skyborg is one of the service’s three “Vanguard” programs, which are the service’s top science and technology priorities and are meant to rapidly demonstrate the viability of emerging technology (Defense Daily, Nov. 21, 2019).
“I’ve had two special assistants on my staff with Ph.D.s in AI and machine learning,” Joseph said this week. “I’ve asked them, ‘If I run the same problem with the same data twice, do I get the same answer?’ And they said, in general, ‘No.’ Well, I don’t mind if I get a whole range of answers, and I can look at that envelope and decide, but if the machine is going to pick one and then fire something and deliver lethal force, I want to be really pretty sure of what I’m doing because not only could I damage things I don’t want to damage and kill people who we do not want to kill, but I could also create tensions in a pre-conflict stage that accelerate movement toward conflict.”
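The run-to-run variability Joseph describes is a well-documented property of machine learning training: without fixed random seeds, two training runs on identical data and identical code can converge to different models. The minimal Python sketch below illustrates the effect on a toy problem; the tiny network, XOR data, and hyperparameters are illustrative assumptions, not anything drawn from the Skyborg program.

```python
import numpy as np

def train_xor(steps=5000, lr=0.5, hidden=3):
    """Train a tiny one-hidden-layer net on XOR with unseeded random init."""
    rng = np.random.default_rng()                 # deliberately no fixed seed
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR labels
    W1 = rng.normal(size=(2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)                  # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
        g = (p - y) / len(y)                      # cross-entropy grad wrt logits
        gh = (g @ W2.T) * (1.0 - h ** 2)          # backprop through tanh
        W2 -= lr * h.T @ g;  b2 -= lr * g.sum(axis=0)
        W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)
    return p.ravel(), W1

# Same code, same data, two runs: the predictions usually agree,
# but the learned weights (and occasionally the answers) differ.
preds1, weights1 = train_xor()
preds2, weights2 = train_xor()
print(preds1.round(3), preds2.round(3))   # typically both near [0, 1, 1, 0]
print(np.allclose(weights1, weights2))    # almost always False
```

Because a problem like this is non-convex, each random starting point can settle into a different solution, which is the “whole range of answers” Joseph says he can tolerate for analysis but not for an unsupervised lethal decision.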
Joseph recalled the time in 1987 when he led a six-month study directed by then-Defense Secretary Caspar Weinberger to devise the outlines of the first missile defense system. Paul Nitze, then an arms control adviser to then-President Ronald Reagan, told the study team that whatever the blueprint, it could not “subtract from strategic stability,” Joseph said.
“I spent a lot of my time trying to figure out what strategic stability was and how it was measured because everybody I asked had a different answer, but that is important, and it’s a consideration with autonomous systems,” he said. “I think Skyborg’s a great idea. It has a lot of flexibility, RPAs [remotely piloted aircraft] even more so, but we need to come to grips with how do we coordinate this, how do we control it, how do we command it, and how much autonomy are we going to allow. I have a feeling we will not allow as much as we can provide.”