A senior army officer has told a parliamentary committee that the armed forces must have confidence in weapons which rely on artificial intelligence (AI) if they are to “sleep at night.”
Lord Hamilton—who was a defence minister under Sir John Major in the 1990s—said: “There was a report some months ago in the papers, the Americans were trialling an AI system and it basically went completely AWOL, blew up the operator, killed the operator, and then blew itself up afterwards. The Americans later denied it ever happened … Let’s hypothesise that this has happened in the M.O.D. What would you then do if that happened?”
Lieut. Gen. Copinger-Symes said: “I don’t know about the incident you’re referring to, but we have very tried and tested procedures for dealing with incidents like that and working out what lessons we learn, and how we take them forward to make sure they’re safe and responsible.”
“Again, not least because if you don’t, your soldiers, sailors and aviators won’t trust that bit of kit and they won’t use that bit of kit, which is one of the reasons we take it so seriously. But above all, they won’t sleep at night, if they don’t know that bit of kit is really achieving what we need safely and responsibly,” he added.
‘Public Anxiety About Artificial Intelligence’
Later, Defence Procurement Minister James Cartlidge told the committee: “We recognise as a government there is public anxiety about artificial intelligence. That is precisely why the prime minister will be holding an international summit in the autumn, about AI safety.”
Mr. Cartlidge said AI defence systems would not be discussed at that summit, but added: “Nevertheless it’s a very important statement of the government’s overall commitment to ensuring there is public confidence in the way we explore AI.”
He said that while the aim of using AI in weapons was partly to give Britain an edge over its competitors, it could also carry out “mundane tasks,” freeing up service personnel for other roles, and keep humans “out of harm’s way,” for example by defusing ordnance.
Mr. Cartlidge said: “The Royal Navy has a gun called Phalanx which contains in its potential use a capability which can arguably be described, for part of its use, as partly autonomous/automated. But the crucial thing is that it can only operate if there is appropriate human involvement … it has to be switched on.”
Lord Houghton of Richmond, a former army officer and chief of the defence staff, asked Mr. Cartlidge about the “holistic regulatory framework” surrounding AI weapons systems.
Cartlidge Says UK Must ‘Stay Ahead of Our Adversaries’
Mr. Cartlidge replied: “To be absolutely clear to you, as far as I am concerned, we must not in any way act naively or put restraints on our country in terms of its ability to exploit AI within the bounds and parameters of international law, but in a way that ensures absolutely we stay ahead of our adversaries.”
He said: “We only have to look at what’s happening in Ukraine. There is some intelligence potentially about AI used by Russia … but irrespective of that in a situation like this, where you know they are operating in a fundamentally nefarious way, they’ve invaded a sovereign country, there has to be a strong presumption that they will be pursuing investment in R&D, technology.”