An expert working for the Navy recently outlined why in wartime unmanned systems will need to make lethal decisions without direct human input.
Sam Tangredi, a retired Navy captain, professor, and Leidos Chair of Future Warfare Studies at the U.S. Naval War College, said systems like the Northrop Grumman
[NOC] MQ-4C Triton unmanned aerial vehicles used for anti-submarine intelligence, surveillance and reconnaissance are controlled remotely from the U.S., “but in a coming conflict with a technological near peer, what’s going to happen is there’s not going to be the communications to control this. So, my view, and maybe it’s a frightened view or a pessimistic view, is that more and more the [character of war is] going to be changed by the use of autonomous systems, not with people in the loop, but people on the loop.”
The Tritons are currently based in Guam but controlled from a Navy facility in Jacksonville, Fla.
While current DoD policy requires that uncrewed systems not use lethal force without a human making the decision, Tangredi said he does not think that will be workable in a future conflict.
Tangredi spoke during a Feb. 13 panel at the WEST 2024 conference, co-hosted by the U.S. Naval Institute and AFCEA.
“I think that will not work well, in the future, when the enemy can jam our communications, could knock down our satellites, can attack our networks.”
Instead, he said, the systems will have to operate under mission control: they are given a mission, sent out to conduct it, and decide to use force based on a mission profile.
Tangredi acknowledged that “people tend to get upset about that, but in reality we’ve used weapons like that all along.”
He compared the future lethal autonomy of unmanned vehicles to the CAPTOR (encapsulated torpedo) sea mines used since the Cold War. Those weapons fire a torpedo upon sensing the hull of a ship or submarine passing by, with the mine using known ship signatures and characteristics as the basis for the decision to fire.
“It would decide what the target was and what to fire out based on its program. That is, humans did not make the decision, humans made the decision of where to place the mine. So these autonomous systems are going to be making the decision and the challenge is how do we cling to the idea that there has to be a human in the loop, pressing the button if we’re going to use lethal force?”
Tangredi warned that America’s potential adversaries “are already building systems that can use lethal force without humans. So I think that’s going to be the issue as far as the character of war.”
Tangredi’s view conflicts with what Adm. Samuel Paparo, commander of U.S. Pacific Fleet, said the following day at the conference.
He said the Chinese government’s words and deeds are “revanchist, revisionist and expansionist” in the Western Pacific, and said the current information-age revolution will turn on “who competes best in this, who adapts better, who is better able to combine data, computing power, AI.”
Paparo said whoever wins the first battle, likely fought in the space, cyber and information domains, would likely prevail in the conflict.
However, while the information age will augment naval combat, maneuver and fires, he said some principles must hold: “allow machines to do tasks and calculations that machines do better, and overlay human judgment when it’s required. And maybe next, never abdicate decisions on human life to machines.”
“War, its nature has not changed. It is inherently human, seeking changes in human behavior, and morally must include human accountability,” Paparo added.