[Editorial] AI arms race will bring catastrophic consequences for humanity

Hankyoreh | Mar. 3, 2021, 16:56

A still from “Terminator Genisys” (provided by Paramount Pictures)

On March 1, the National Security Commission on Artificial Intelligence submitted its final report to US President Joe Biden and Congress, urging the US and its allies to reject demands for a global ban on autonomous weapon systems and weapons enabled by artificial intelligence (AI).

Given grave global concerns about "killer robots" that can kill people autonomously, without human control, we firmly oppose attempts to use the threat of China to justify an arms race in AI-enabled weapons.

The report, passed unanimously by the commission, asserts that the US needs to empower AI-enabled weapons to make decisions and take action more quickly than humans can in order to maintain US military superiority.

While acknowledging that “improperly designed” AI systems could “increase the risk of military escalation,” the report contends that “defending against AI-capable adversaries without employing AI is an invitation to disaster.”

A considerable portion of this voluminous report, which runs to more than 750 pages, focuses on how to block China's ambitions to gain global preeminence in AI. Its chief prescriptions are massive investment in AI development and in semiconductor production capacity to maintain US military superiority over China.

As it becomes more common to launch attacks from remotely operated drones, governments around the world are working to develop AI-enabled weapons that can decide to attack on their own, without relying on human control or judgment.

High-tech companies helping to develop lethal autonomous weapon systems (LAWS), known in the vernacular as "killer robots," are reaping huge profits. The commission that produced this report was chaired by former Google CEO Eric Schmidt; its members include executives from companies such as Amazon, Microsoft, and Oracle that have won bids for big-ticket IT development projects at the Pentagon.

Scientists and ordinary people around the world have cautioned against the risk of developing AI-enabled weapons and made efforts to regulate such weapons.

In 2013, NGOs formed an international coalition called the Campaign to Stop Killer Robots, which advocates a full ban on AI-enabled weapon systems. In 2015, more than 1,000 people, including physicist Stephen Hawking, Apple co-founder Steve Wozniak, and Tesla CEO Elon Musk, signed an open letter warning of the risks of an AI arms race.

In 2018, UN Secretary-General António Guterres said that killer robots should be banned under international law. That same year, 50 robotics researchers from overseas declared a boycott of joint research with the Korea Advanced Institute of Science and Technology (KAIST) in protest of KAIST and Hanwha Systems' research into AI-enabled weapon systems.

The international community must stop powerful countries such as the US and China from stumbling into an AI arms race over the objections of experts and the general public, given that such a race could have catastrophic consequences for humanity. The time has come to establish international norms that clarify ethical and legal responsibility in an era of machines that kill people.

Please direct comments or questions to english@hani.co.kr

Copyright © Hankyoreh. All rights reserved. Unauthorized reproduction, redistribution, and crawling prohibited.
