Enabling multiple, collaborative Pentagon autonomous systems, such as drones, to learn from experience and adapt in fluid environments is a significant challenge, a U.S. Air Force Research Laboratory (AFRL) official said this week.

“Sometimes things have undergone experiences, and they can impart to you the knowledge they received, and that’s incorporated in your knowledge base,” said Steven “Cap” Rogers, a senior scientist at AFRL and the head of AFRL’s Autonomous Capabilities Team 3. “That’s culture. I absolutely have to have these autonomous systems have culture so that they can share the knowledge they’ve assimilated, and, of course, I’d like them to share it with us humans. That’s a big challenge because, in many of the systems that I have built and am building, the knowledge is represented in a way that doesn’t make sense to me. It makes sense to me as somebody who understands the math, but [not] in terms of understanding why it did something.”

In 1997, Rogers founded Qualia Computing, Inc. to improve the detection of breast cancer after his mother was diagnosed with the disease.

“I had got out of the military and had spent 20 years making smart weapons, the last bit trying to find [Iraqi] Scuds,” he said. “I knew how to find targets in images. When I applied that to find cancers, I showed it to a physician, and the physician said, ‘Why do you call it cancer?’ I tried to explain to this guy statistical pattern recognition and decision boundaries, at which he looked at me like I was an alien, which I was, and he said, ‘It doesn’t look like cancer.’ The point is the knowledge was represented in this neural net as a high-dimensional space and decision boundaries, and that just wasn’t predictable to that human. What I need is a representation where all the agents involved–humans and computers–can share knowledge at the appropriate level of abstraction so that they can incorporate that into their rationale. So that’s a really big challenge.”

In 2003, Qualia and a successor company, CADx Systems, merged with iCAD, Inc. [ICAD], which continues work on computer-aided detection of “hidden” cancers that go undiagnosed by physicians.

The Air Force has progressed significantly on the artificial intelligence (AI)/autonomy front since 2018, Rogers said, when he briefed the service’s four-star generals at Joint Base Andrews, Md., on how AI would “change their world.” Now-retired Air Combat Command head Gen. James “Mobile” Holmes asked Rogers whether the follow-on to the Lockheed Martin [LMT] F-22 fighter would be crewed or uncrewed, and Rogers said that he replied, “Sir, I’m gonna put my bet that it won’t be just crewed, but the way to get after that is, let’s go do it. Not academic, not a simulation. Let’s go fly it.”

In March and September 2021, Air Force Test Pilot School students at Edwards AFB, Calif., flew more than 25 sorties in the AI-enabled LJ-25 Learjet experiments, which the test pilots named “Have CYLON” and “Have DUDE,” followed by eight sorties in the “Have BATTERIES” experiment of the X-62A Variable Stability In-Flight Simulator Test Aircraft (VISTA)–a modified Block 30 F-16 fighter–in November 2022.

In December 2022, AFRL, the Air Force Test Center, and the Defense Advanced Research Projects Agency (DARPA) held 12 AI-driven VISTA flight tests to demonstrate how AFRL’s Autonomous Air Combat Operations (AACO) and DARPA’s Air Combat Evolution (ACE) AI algorithms would permit the X-62A to execute advanced fighter maneuvers (Defense Daily, Feb. 14, 2023).

In the early Learjet flight tests, “test pilots were getting [air] sick,” as the first reinforcement learning algorithm–“Bronco”–was “jerking them all over the frickin’ sky” when in missile-avoidance mode, for example–a situation remedied by in-flight algorithm updates, Rogers said.

Last August, AFRL said that it had conducted a three-hour AI-navigated sortie of the Kratos Defense & Security Solutions [KTOS] XQ-58A Valkyrie drone at Florida’s Eglin Test and Training Complex on July 25 in what the lab called the first flight using AI to control an uncrewed jet (Defense Daily, Aug. 3, 2023). AFRL said that the flight leveraged the previous two years of the lab’s work with Kratos on the Air Force’s Skyborg Vanguard program.

Data from that test flight is to feed the Air Force effort to develop multi-mission Collaborative Combat Aircraft (CCAs) to accompany the Next Generation Air Dominance crewed fighter and the Lockheed Martin F-35A.

Rogers said that one key to developing and fielding autonomous systems quickly is to test them in the field right off the bat.

“AI is not a thing, nor is it a capability,” he said in response to a question on the best way to include AI in modernization efforts. “It enables capabilities…I always start by telling people, ‘Don’t write a requirements document.’ That’s not the way you do AI. The way you do AI is you quickly field, and you iterate. What you have to do is find that champion…that person who’s so invested in solving the problem that they’re gonna bend these frickin’ rules and allow you the access to deliver a capability into the fight, even if it’s exercises, and then very quickly get their hands on it and iterate/repeat, and finally the generals pay attention and allow you to acquire. The way you acquire AI is continuous development and integration…You’ve got to make sure that you’re not buying a shrink-wrapped piece of software. That model doesn’t work in this space.”