Google proposes classification method for artificial general intelligence

2023. 11. 23. 11:12

[Image source: Pixabay]
Google LLC has proposed the world’s first classification scheme for artificial general intelligence (AGI), at a time when OpenAI is facing an internal clash between AI safety and service expansion.

AGI is a concept introduced in 1997 by Professor Mark Gubrud of the University of North Carolina, who predicted the emergence of military AI with self-replicating systems. Once considered a term confined to science fiction, it is now closer than ever to becoming a reality.

This is similar to the way autonomous vehicles, once merely an imagined concept, gained development momentum after systematic standards were established.

Autonomous vehicles are classified into six levels, from 0 to 5, under SAE International standards.

On Tuesday, Google’s DeepMind research team posted a paper on the levels of AGI to the arXiv preprint server ahead of formal publication. DeepMind stated that the concept of AGI, which has evolved over time, is crucial in computing research, while acknowledging that it remains controversial at times.

The researchers, however, countered that AGI has shifted from a topic of philosophical debate to a practical concept.

Google DeepMind broadly classified AGI into six levels from 0 to 5. Level 0 is “No AI”; level 1 is “Emerging,” comparable to an unskilled adult; level 2 is “Competent,” exceeding 50 percent of skilled adults; level 3 is “Expert,” exceeding the top 10 percent of skilled adults; level 4 is “Virtuoso,” on par with the top 1 percent of skilled adults; and level 5 is “Superhuman,” surpassing the abilities of all skilled adults.

They also differentiated between general AGI, which covers a broad range of tasks, and specialized AGI, which handles only a single field.

Among general systems, level 0 is represented by the crowdsourced web services launched by AWS in 2001, while level 1 includes OpenAI’s ChatGPT, Google’s Bard, and Meta’s Llama 2.

While no general AGI has reached level 2 or above, specialized AGI has already hit level 5. One example is DeepMind’s AlphaFold, which predicts protein structures: determining a protein structure normally takes months to years, but AlphaFold analyzes one in two to three hours.
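To make the two-axis scheme concrete, the sketch below is a hypothetical Python illustration, not code from DeepMind’s paper, that pairs the performance levels with the general-versus-specialized axis and tags the example systems the article cites; the class names, threshold comments, and example labels are assumptions drawn only from this article’s wording.

```python
from dataclasses import dataclass
from enum import Enum


class Level(Enum):
    """Performance levels as described in the article."""
    NO_AI = 0       # Level 0: no AI capability
    EMERGING = 1    # Level 1: comparable to an unskilled adult
    COMPETENT = 2   # Level 2: exceeds 50 percent of skilled adults
    EXPERT = 3      # Level 3: exceeds the top 10 percent of skilled adults
    VIRTUOSO = 4    # Level 4: on par with the top 1 percent of skilled adults
    SUPERHUMAN = 5  # Level 5: surpasses all skilled adults


class Generality(Enum):
    GENERAL = "general"          # covers a broad range of tasks
    SPECIALIZED = "specialized"  # handles a single field


@dataclass(frozen=True)
class Classification:
    system: str
    level: Level
    generality: Generality


# Example systems cited in the article.
EXAMPLES = [
    Classification("AWS crowdsourced web service", Level.NO_AI, Generality.GENERAL),
    Classification("ChatGPT", Level.EMERGING, Generality.GENERAL),
    Classification("Bard", Level.EMERGING, Generality.GENERAL),
    Classification("Llama 2", Level.EMERGING, Generality.GENERAL),
    Classification("AlphaFold", Level.SUPERHUMAN, Generality.SPECIALIZED),
]

if __name__ == "__main__":
    for c in EXAMPLES:
        print(f"{c.system}: Level {c.level.value} ({c.level.name}), {c.generality.value}")
```

In this view a system’s classification is simply the pair of level and generality, which is why the framework can place general AGI at level 1 today while specialized AGI has already reached level 5.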

Google DeepMind also listed the criteria underlying these standards, including a focus on capabilities, generality, performance, potential, metacognition, ecological validity, and the path of progress toward AGI.

AGI should be defined by its capabilities rather than the processes behind them, and it must be able to learn new tasks, the team added. A system does not need to be deployed or finished; what matters is its potential, and it should be evaluated on tasks that people actually value.

The concept and terminology of AGI have been around for a long time, but this is the first time they have been systematically organized.

There are growing calls in the industry to prepare safety guidelines ahead of AGI’s arrival.

AGI systems with capabilities beyond human imagination, such as Google’s AlphaZero or AlphaFold, could produce output that is hard for humans to distinguish from the real thing when such systems are used for deepfakes, deception, or manipulation.

There is also the need to prepare for the possibility of AGI becoming uncontrollable if it surpasses human abilities.

An example is OpenAI’s Superalignment project, which aims to align AI goals with human values to ensure that AGI, when it does emerge, does not harm humans.

Traditional safety guidelines assume that tasks performed by AI systems can be evaluated and checked by humans with superior capabilities. But once AGI surpasses humans, such safety checks may become impossible, industry insiders noted.

Copyright © 매일경제 & mk.co.kr. Unauthorized reproduction, redistribution, and use for AI training are prohibited.
