Korea to establish AI safety institute at ETRI

May 23, 2024, 08:51


[Courtesy of Korea Institute of Science and Technology]
South Korea will establish an institute focused on studying safety issues related to artificial intelligence (AI) technology.

The plan was unveiled during a press briefing following the ministerial session at the AI Seoul Summit held at the Korea Institute of Science and Technology (KIST) in Seoul on Wednesday.

“Today, we are witnessing another wave of innovation driven by unremitting efforts from private AI big tech companies such as OpenAI Inc. and Google LLC,” said Minister of Science and ICT Lee Jong-ho. “But we cannot ignore our anxiety over associated risks to our society. In this regard, we plan to open an AI Safety Institute at the Electronics and Telecommunications Research Institute (ETRI).”

ETRI is headquartered in the city of Daejeon.

The launch of such institutes is part of a global trend: the United States has launched a consortium, while the United Kingdom, Canada, and Japan have set up institutes of their own.

“We will strengthen cooperation among AI safety institutes, take measures to identify AI-generated content, such as watermarking, and enhance collaboration on the development of international standards,” Lee said.

On the same day, ministerial level representatives of the countries participating in the AI Seoul Summit adopted a joint statement calling for the advancement of the safety, innovation, and inclusivity of AI technology.

On safety, the core of the Ministerial Statement is building a framework to measure AI risks, particularly to identify the critical thresholds beyond which AI exceeds safe levels.

Additionally, there will be a push for innovation by actively introducing AI in sectors such as administration, welfare, education, and healthcare.

The inclusion aspect will focus on bridging the digital divide through AI education.

Meanwhile, Andrew Ng, a renowned AI scholar and professor at Stanford University, who delivered the keynote speech at the summit, emphasized the need to distinguish between technology and its applications when seeking various opportunities from AI.

“For example, large language models (LLMs) can create diverse applications like medical devices, chatbots, and deepfakes,” he said. “It is crucial to differentiate which applications are good and which are bad.”

He added: “If regulations succeed, everyone will become a loser because access to AI technology will inevitably decrease. The issue of regulating open source needs more careful consideration.”

Copyright © Maeil Business Newspaper & mk.co.kr. Unauthorized reproduction, redistribution, and use for AI training are prohibited.
