The U.S. military is embracing artificial intelligence. How could it not, given the astounding strides the technology has made recently and the way it has permeated every aspect of how knowledge is created and shared? Fears are rampant that AI may soon outpace the human ability to control it, reaching the "singularity": the point at which the development of robotics and intelligent machines becomes uncontrollable, surpassing human brainpower and evolving on its own.
So the question is: How is the U.S. military using it? Should we fear that the big decisions will be made by artificial intelligence, or that AI may override the humans making them?
According to two top AI advisors in U.S. Central Command, the command is using it as a tool for quickly digesting data and helping leaders make the right decision, not for making those decisions for the humans in charge.
CENTCOM, which is tasked with safeguarding U.S. national security in the Middle East and parts of Central and South Asia, just hired Dr. Andrew Moore as its first AI advisor. Moore is the former director of Google Cloud AI and former dean of the Carnegie Mellon University School of Computer Science, and he'll be working with Schuyler Moore, CENTCOM's chief technology officer.
In an interview with Fox News Digital, both agreed that while some imagine, or fear, AI-driven weapons, the U.S. military aims to keep humans in the decision-making seat and to use AI to assess the massive amounts of data that help the people sitting in those seats.
“There’s huge amounts of concern, rightly so, about the consequences of autonomous weapons,” Dr. Moore said. “One thing that I’ve been very well aware of in all my dealings with… the U.S. military: I’ve never once heard anyone from the U.S. military suggest that it would be a good idea to create autonomous weapons.”
Schuyler Moore said the military sees AI as a “light switch” that helps people make sense of data and point them in the right direction. She stressed that the Pentagon believes that it “must and will always have a human in the loop making a final decision.”
The goal is, “Help us make a better decision, don’t make the decision for us,” she said.
One example they discussed in CENTCOM’s sphere of influence is using AI to crack down on illegal weapons shipments around Iran. Ms. Moore said that officials believe AI can be used to help the military narrow the number of possibly suspicious shipments by understanding what “normal” shipping patterns look like and flagging those that fall outside the norm.
“You can imagine thousands and thousands of hours of video feed or images that are being captured from an unmanned surface vessel that would normally take an analyst hours and hours and hours to go through,” Ms. Moore said. “And when you apply computer vision algorithms, suddenly you can drop that time down to 45 minutes.”
Similar efforts will likely be made in areas such as air traffic, so the U.S. can identify threatening patterns in the air more quickly than human analysts could, by learning what traffic is "normal" and flagging what falls outside the norm.
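To make that concrete, the approach both officials describe is standard anomaly detection: learn a baseline from ordinary traffic, then surface the tracks that deviate from it for a human to review. The sketch below is a minimal illustration of that idea under assumed details, not CENTCOM's actual system; the track features (speed, course changes, transponder gaps) and the choice of scikit-learn's IsolationForest are illustrative assumptions.

```python
# Minimal sketch of the anomaly-flagging idea described above, not a real
# military system. Feature names and the IsolationForest model are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in "normal" shipping tracks:
# [avg speed (knots), course changes per hour, transponder gap (minutes)]
normal_tracks = rng.normal(loc=[12.0, 2.0, 5.0], scale=[2.0, 1.0, 2.0], size=(1000, 3))

# Learn what "normal" looks like; contamination sets the expected outlier share.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_tracks)

# New tracks to screen: the last one loiters slowly, turns often, and goes dark.
new_tracks = np.array([
    [11.5, 2.2, 4.0],    # ordinary transit
    [13.0, 1.5, 6.0],    # ordinary transit
    [3.0, 15.0, 240.0],  # unusual pattern worth an analyst's attention
])

flags = model.predict(new_tracks)  # +1 = consistent with normal traffic, -1 = flagged
for track, flag in zip(new_tracks, flags):
    status = "FLAG for review" if flag == -1 else "normal"
    print(f"track {track.tolist()} -> {status}")
```

In this framing, the algorithm only narrows the pile; the flagged tracks still go to an analyst, which is the "human in the loop" role both officials emphasize.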
Dr. Moore said he also believes the U.S. is in a race to create a more responsible AI system compared to those being developed by U.S. adversaries. Some countries, he said, are getting “scarily good” at using AI to conduct illegal surveillance, and, “We have to be ready to counter these kinds of aggressive surveillance techniques against the United States.”
Ms. Moore said the U.S. hopes to lead the way in developing responsible AI applications. “That is something that we are able to positively influence, hopefully by demonstrating our own responsible use of it,” she said.
But many are skeptical, first, about the ability to confine AI within these limits and, second, about the prospect of keeping adversaries from going rogue and letting AI make their big decisions. What is clear is that we have reached a perilous crossroads in the development of AI.