Representatives from 150 tech companies sign pledge against 'killer robots'

Over 2,400 individuals working in artificial intelligence and robotics have signed a pledge against the use of the technology for lethal purposes.
Written by Asha Barbaschow, Contributor

A pledge against the use of autonomous weapons has been signed by over 2,400 individuals working in artificial intelligence (AI) and robotics, representing 150 companies from 90 countries.

The pledge, signed at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm and organised by the Future of Life Institute, calls on governments, academia, and industry to "create a future with strong international norms, regulations, and laws against lethal autonomous weapons".

"I'm excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect," president of the Future of Life Institute and physics professor at the Massachusetts Institute of Technology Max Tegmark said.

"AI has huge potential to help the world -- if we stigmatise and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilising as bioweapons, and should be dealt with in the same way."

Signatories of the pledge include individuals such as Elon Musk, as well as representatives from organisations like Google DeepMind, the XPRIZE Foundation, ClearPath Robotics/OTTO Motors, and the European Association for AI.

The institute defines lethal autonomous weapons systems -- also known as "killer robots" -- as weapons that can identify, target, and kill a person, without a human "in-the-loop".

"That is, no person makes the final decision to authorise lethal force: The decision and authorisation about whether or not someone will die is left to the autonomous weapons system," the institute explains.

It said, however, that this does not include today's drones, which are under human control, or autonomous systems that merely defend against other weapons.

The pledge follows an April boycott of the Korea Advanced Institute of Science and Technology (KAIST) by over 50 researchers from 30 countries, after it was revealed the university would be opening an AI weapons lab in collaboration with Hanwha Systems, a defence company that builds cluster munitions despite United Nations bans.

KAIST announced a few days later it would not be participating in the development of lethal autonomous weapons.

"KAIST does not have any intention to engage in development of lethal autonomous weapons systems and killer robots," KAIST president professor Sung-Chul Shin told ScienceInsider in response to the boycott.

"KAIST will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control."

University of New South Wales (UNSW) Scientia Professor of artificial intelligence Toby Walsh, who organised the KAIST boycott and signed this week's pledge, said it is important to keep life-or-death decisions under human control.

"We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organisations to pledge to ensure that war does not become more terrible in this way," Walsh said on Tuesday.

PREVIOUS AND RELATED

Should we ban killer robots?

Academics and a group of NGOs have different opinions on how autonomous weapons should be defined and regulated.

University boycott ends after KAIST confirms no 'killer robot' development

The boycott involving over 50 researchers from 30 countries ended after South Korea's KAIST agreed not to participate in the development of lethal autonomous weapons.

Elon Musk among tech founders to call for UN to ban 'killer robots'

Founders of AI and robotics companies around the world are urging the UN to ban the development and use of autonomous weapons before it's too late.

Elon Musk fears AI may lead to World War III, as researchers study the risks of 'stupid, good robots' (TechRepublic)

As with most technology, there is a fine line between good and evil in its use. What happens when AI built with good intentions goes bad?
