Several experts and researchers working for the Navy recently outlined major challenges for the Navy’s use of artificial intelligence (AI).

Sam Tangredi, a retired Navy captain and professor and Leidos Chair of Future Warfare Studies at the U.S. Naval War College, highlighted three factors he thinks AI developers must address more for DoD applications in a potential conflict: deception as accurate data decreases, working with incomplete data in conflict, and auditing the AI development trail.

While AI developers and company executives say deception can be overcome by adding more data to detect anomalies, “that’s not what the military faces. In the wartime situation, you’re going to have big data first, but as conflict goes on, you’ll have less data. And their solution to deception was all you need is more data. Well you’re not going to have more data. The deception piece that has to be worked out is how does the AI system recognize it is being deceived,” Tangredi said during a Feb. 13 panel at the WEST 2024 conference, co-hosted by the U.S. Naval Institute and AFCEA.

An MQ-4C Triton Unmanned Aircraft System (UAS) assigned to Unmanned Patrol Squadron 19 (VUP-19), taxis after landing on Andersen Air Force Base. VUP-19 will operate and maintain aircraft in Guam as part of the MQ-4C’s initial operational capability (IOC) in the aircraft’s second deployment after returning to Guam in mid-September 2024. (Photo: U.S. Navy)

“If you’re deceived, you don’t have enough data. Well, in a wartime situation you’re going to have less data because your sensors are going to be degraded, and you need to develop a system to know when the enemy is spoofing you,” he continued.
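Tangredi did not prescribe a method, but one common way to frame “knowing you are being spoofed” is as out-of-distribution detection: compare incoming sensor readings against a trusted baseline and flag batches that drift too far from it. The following is a minimal, illustrative sketch; the data, threshold, and scoring are invented for demonstration:

```python
import numpy as np

def spoofing_score(baseline: np.ndarray, incoming: np.ndarray) -> float:
    """Crude distribution-shift score: how far incoming sensor readings
    sit from the baseline, measured in baseline standard deviations."""
    mu, sigma = baseline.mean(axis=0), baseline.std(axis=0) + 1e-9
    z = np.abs((incoming - mu) / sigma)   # per-feature z-scores
    return float(z.mean())                # average deviation across the batch

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(10_000, 4))  # trusted peacetime data
normal   = rng.normal(0.0, 1.0, size=(100, 4))     # ordinary readings
spoofed  = rng.normal(2.5, 1.0, size=(100, 4))     # injected decoy readings

for name, batch in [("normal", normal), ("spoofed", spoofed)]:
    score = spoofing_score(baseline, batch)
    flag = "POSSIBLE DECEPTION" if score > 1.5 else "ok"
    print(f"{name}: score={score:.2f} -> {flag}")
```

A fielded system would need far more robust statistics, but the shape of the problem is the one Tangredi describes: the system must carry its own check on whether its inputs still look like the world it was trained on.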

The second challenge is how to use an AI system when the military will be relying on incomplete data in a conflict, data that degrades over time under enemy attacks.

“Is there a certain point that the system really is not producing a useful answer…It’s no longer a useful tool?” Tangredi asked. He said this especially applies to predictive AI working with reduced and incomplete data.
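That usefulness threshold can be probed empirically. As a rough sketch, assuming a generic classifier on synthetic data (the 0.7 accuracy floor is an arbitrary stand-in for a mission-specific bar), one can mask growing fractions of the inputs and watch where the model stops producing a useful answer:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a predictive model on "complete" data, then degrade its
# inputs and watch accuracy fall toward the coin-flip floor.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X[:1000], y[:1000])

rng = np.random.default_rng(0)
X_test, y_test = X[1000:], y[1000:]
for frac_missing in (0.0, 0.25, 0.5, 0.75, 0.9):
    X_deg = X_test.copy()
    mask = rng.random(X_deg.shape) < frac_missing
    X_deg[mask] = 0.0  # lost sensor feeds imputed with zeros
    acc = model.score(X_deg, y_test)
    usable = "useful" if acc > 0.7 else "NO LONGER USEFUL"
    print(f"{frac_missing:>4.0%} of inputs missing: accuracy={acc:.2f} ({usable})")
```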

Third, Tangredi said DoD or its contractors need to be able to audit the full trail of an AI system’s development.

“My concern is that a lot of the AI that we’re trying to apply coming from open source is being written up by people and places that are potential opponents. And I don’t know if anybody’s tracking that out and saying, where can this be influenced?”

Tangredi said he wants to know how to audit an AI model and whether a company can assure DoD that developers can go back to the original source code to ensure it was made aboveboard.
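One building block for that kind of audit trail is artifact hashing: pinning every piece of training code and data by content hash so an auditor can later confirm nothing was silently swapped. A minimal sketch, with throwaway files standing in for a real pipeline’s code and data:

```python
import hashlib, json, tempfile, pathlib
from datetime import datetime, timezone

def sha256_file(path: pathlib.Path) -> str:
    """Content hash of one artifact in the development trail."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(paths: list[pathlib.Path]) -> dict:
    """Pin every code/data artifact by hash so an auditor can later
    verify the pipeline against the original sources."""
    return {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "artifacts": {p.name: sha256_file(p) for p in paths},
    }

# Demo with throwaway stand-ins for training code and data.
workdir = pathlib.Path(tempfile.mkdtemp())
files = []
for name, body in [("train.py", "# model code"), ("dataset.csv", "a,b\n1,2")]:
    f = workdir / name
    f.write_text(body)
    files.append(f)

print(json.dumps(build_manifest(files), indent=2))
```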

Separately, Katie Rainey, Director of Science and Technology at the Intelligence, Surveillance & Reconnaissance Department of Naval Information Warfare Center (NIWC) Pacific, told the panel that, as a researcher, she is more cynical about the use and integration of AI.

“I think it’s important to talk about how hard the problems are that we have to solve. Because if we can’t be realistic about the challenges, then we’re never going to be able to solve them,” she said.

Rainey sees the biggest challenges in DoD AI as trust, meaning evaluating algorithms to ensure they work properly in the difficult operating environments where they are needed, as well as MLOps, or Machine Learning Operations.

She argued MLOps here translates to “the logistics of getting data and models from one place to another.”

Rainey said that, in contrast to commercial settings where logistics typically means someone driving data between locations, “the Navy has more difficult logistical challenges than that – they might not have paved roads, they might not be able to safely drive from one place to another, we might not have the internet to get data from one place to another, we might not have GPS to know where we are, from one time to another. And we might not have platforms that have the amount of compute that we’re used to.”

Therefore, she sees the DoD AI logistical challenge as “the next frontier for our research and development community.”
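One small example of what that research problem looks like in software terms is rsync-style chunk comparison: when a degraded link drops or corrupts part of a model file, only the damaged chunks need to be re-sent rather than the whole file. This sketch is purely illustrative, not a NIWC Pacific technique:

```python
import hashlib

CHUNK = 1 << 16  # 64 KiB chunks -- small enough to resend over a bad link

def chunk_hashes(blob: bytes) -> list[str]:
    """Per-chunk digests of a model or data file."""
    return [hashlib.sha256(blob[i:i + CHUNK]).hexdigest()
            for i in range(0, len(blob), CHUNK)]

def chunks_to_resend(sender: bytes, receiver: bytes) -> list[int]:
    """Indices of chunks the receiver is missing or got corrupted,
    so only those travel over the degraded link."""
    s, r = chunk_hashes(sender), chunk_hashes(receiver)
    r += [""] * (len(s) - len(r))        # receiver may hold less data
    return [i for i, (a, b) in enumerate(zip(s, r)) if a != b]

model = bytes(range(256)) * 2000         # stand-in for a model file
partial = bytearray(model)
partial[70000] ^= 0xFF                   # one byte corrupted in transit
partial = bytes(partial[:300000])        # and a truncated tail

print("resend chunks:", chunks_to_resend(model, partial))
```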

The panel’s moderator, George Galdorisi, a retired Navy captain and Director of Strategic Assessments and Technical Futures at NIWC Pacific, pointed to a more immediate AI limitation he sees in the Navy today: connecting P-8As and MQ-4Cs more directly.

A P-8A Poseidon assigned to Air Test and Evaluation Squadron (VX) 20 flies over USS Zumwalt (DDG-1000) in Chesapeake Bay in 2016. (Photo: U.S. Navy)

While the Boeing [BA] P-8A Poseidon aircraft is meant to team with the Northrop Grumman [NOC] MQ-4C Triton unmanned aircraft, there is no real teaming because no requirement exists for the platforms to talk directly to each other.

“The reason there’s no teaming is two separate manufacturers build those platforms, and no one has written an operational requirement to do that. So if you’re a program manager, you don’t do that, because there’s not a requirement,” Galdorisi said.

Galdorisi recommended that industry officials in the audience look at bringing the two platforms together to field a real set of AI-enabled platforms.

“I think it goes to you all in industry to say, well, you know, it’s kind of like Reese’s Pieces, I got chocolate, and I got peanut butter. And if we bring these together, we can have AI-enabled platforms.”

Galdorisi said the result would be the P-8 mission commander talking to the Triton the way people talk to Apple’s Siri digital assistant.

“You say stay on my wing for now or go Northwest or do this or do that and the Triton talks back on what it’s doing. That’s not breaking the laws of physics. That’s pretty basic technology.”
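Reduced to software terms, that kind of tasking is a mapping from a constrained vocabulary of utterances to platform commands plus a spoken acknowledgment. The toy sketch below is purely illustrative; the command set and responses are invented, not drawn from any program of record:

```python
# A toy command parser: the "Siri-like" tasking Galdorisi describes
# reduces to matching utterances against a small vocabulary of
# platform commands and echoing back an acknowledgment.
COMMANDS = {
    "stay on my wing": "FORMATION_HOLD",
    "go northwest":    "SET_HEADING 315",
    "return to base":  "RTB",
}

def task_uav(utterance: str) -> str:
    order = COMMANDS.get(utterance.lower().strip())
    if order is None:
        return "Triton: say again, command not recognized"
    return f"Triton: executing {order}"

print(task_uav("Stay on my wing"))  # Triton: executing FORMATION_HOLD
print(task_uav("Go northwest"))     # Triton: executing SET_HEADING 315
```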