The list of pledges against certain forms of AI has been growing for years, and another entry has just been added.
A new pledge, signed by more than 160 AI-related companies and organizations from 36 countries and over 2,400 individuals from 90 countries, commits its signatories to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” The document is among the largest of its kind.
The pledge has been signed by some of the most notable names and organizations in AI, including Google DeepMind, the XPRIZE Foundation, and the European Association for AI (EurAI), as well as Elon Musk, Stuart Russell, and Toby Walsh. Canadian company Clearpath Robotics and Yoshua Bengio, one of the country’s top AI minds, also signed.
The effort was organized by Max Tegmark, the president of the Future of Life Institute (FLI), a research organization based in Boston that works to mitigate risks facing humanity. The pledge was announced at the International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, an event that draws thousands of the world’s top AI researchers.
“I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect,” said Tegmark. “AI has huge potential to help the world – if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”
The goal of the pledge is to bring the world’s AI talent together to stop the development and deployment of lethal autonomous weapons—weapons that can apply lethal force without a human authorizing it, unlike the military drones in use today, which still require a human to make the final call.
“Artificial intelligence (AI) is poised to play an increasing role in military systems,” begins the pledge. “There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.”
Ryan Gariepy, co-founder and CTO of Clearpath, is a vocal advocate for the pledge.
“Clearpath continues to believe that the proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world,” said Gariepy. “No nation will be safe, no matter how powerful. Clearpath’s concerns are shared by a wide variety of other key autonomous systems companies and developers, and we hope that governments around the world decide to invest their time and effort into autonomous systems which make their populations healthier, safer, and more productive instead of systems whose sole use is the deployment of lethal force.”
A UN meeting on lethal autonomous weapons will take place in August. Tegmark and the pledge’s signatories hope the effort will push the UN to establish an international agreement barring the development and use of such weaponry.
Agreements and pledges like this are not new in the AI world, though this latest effort is among the largest to date. Around this time last year, tech leaders called on the UN to ban these weapons, while Canadian organizations delivered their own version to the federal government, which Bengio also signed. Another Toronto group called for machine learning systems to respect human rights.