Artificial intelligence (AI) experts from around the world published a letter to the United Nations on Sunday, calling on the organization to address the danger posed by lethal autonomous weapons.

“As companies building the technologies in artificial intelligence and robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm,” the letter said. “Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

The letter was released by Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney and a key organizer of the effort, at the opening of the International Joint Conference on Artificial Intelligence (IJCAI 2017) in Melbourne, Australia. It marks the first time that AI and robotics companies have taken a joint public stance on the topic.

“Nearly every technology can be used for good and bad, and artificial intelligence is no different,” said Walsh. “It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialize war. We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organizations for a UN ban on such weapons, similar to bans on chemical and other weapons.”

“I signed the open letter because the use of AI in autonomous weapons hurts my sense of ethics, because it would be likely to lead to a very dangerous escalation, because it would hurt the further development of AI’s good applications, and because it is a matter that needs to be handled by the international community, similarly to what has been done in the past for some other morally wrong weapons (biological, chemical, nuclear),” said Yoshua Bengio, founder of Element AI and a leading deep learning expert.

“Unless people want to see new weapons of mass destruction, in the form of vast swarms of lethal microdrones, spreading around the world, it’s imperative to step up and support the United Nations’ efforts to create a treaty banning lethal autonomous weapons,” said Stuart Russell, founder and vice-president of Bayesian Logic and a letter signatory. “This is vital for national and international security.”

Among the signatories is Tesla and SpaceX founder Elon Musk, who has long warned against the dangers of autonomous robots.

“We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems,” the letter said. “Many of our researchers and engineers are eager to offer technical advice to your deliberations.”

Though the letter expressed regret that the first scheduled GGE meeting was canceled because some states had not paid their financial contributions to the UN, it urged the High Contracting Parties of the group to work hard at preventing an arms race in autonomous weapons.

“We therefore implore the High Contracting Parties to find a way to protect us all from these dangers,” the letter said.

Jessie Bur is a Staff Reporter for MeriTalk covering Cybersecurity, FedRAMP, GSA, Congress, Treasury, DOJ, NIST and Cloud Computing.