Google proposes classification method for artificial general intelligence
AGI is a concept introduced in 1997 by Professor Mark Gubrud of the University of North Carolina, who predicted the emergence of military AI with self-replicating systems. Once considered a term confined to science fiction, it is now closer than ever to becoming a reality.
This is similar to the way autonomous vehicles, which were once merely an imaginary concept, gained momentum in their development after the establishment of systematic standards.
Autonomous vehicles are structured into six levels from 0 to 5 according to the SAE International standards.
On Tuesday, Google’s DeepMind research team shared a paper on the levels of AGI on the preprint server arXiv ahead of formal publication. DeepMind stated that the concept of AGI, which has evolved over time, is crucial in computing research, while acknowledging that it is sometimes controversial.
The researchers, however, countered that AGI has shifted from a subject of philosophical debate to a practical concept.
Google DeepMind broadly classified AGI into six levels from 0 to 5. Level 0 is “No AI”; level 1 is “Emerging,” comparable to an unskilled adult; level 2 is “Competent,” exceeding 50 percent of skilled adults; level 3 is “Expert,” exceeding the top 10 percent of skilled adults; level 4 is “Virtuoso,” representing the top 1 percent of skilled adults; and level 5 is “Superhuman,” surpassing the abilities of skilled adults.
They also differentiated between general AGI, which can cover a wide range of tasks, and specialized AGI, which handles only a single field.
Level 0 is represented by the crowdsourced web pages launched by AWS in 2001 while level 1 includes OpenAI’s ChatGPT, Google Bard, and Meta Llama 2.
Although no general AGI has yet reached level 2 or above, specialized AGI has already hit level 5; an example is AlphaFold, which predicts protein structures. Determining a protein structure usually takes months to years, but AlphaFold analyzes one in two to three hours.
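To make the two-axis scheme concrete, the classification described above can be sketched as a small data structure. The snippet below is a minimal illustration in Python, not code from the DeepMind paper; the names Level and SystemRating are hypothetical, and the ratings simply restate the examples cited in this article.

from dataclasses import dataclass
from enum import IntEnum

# Illustrative encoding of the level/generality matrix described above
# (hypothetical names; not taken from the DeepMind paper).
class Level(IntEnum):
    NO_AI = 0        # no AI
    EMERGING = 1     # comparable to an unskilled adult
    COMPETENT = 2    # exceeds 50 percent of skilled adults
    EXPERT = 3       # exceeds the top 10 percent of skilled adults
    VIRTUOSO = 4     # top 1 percent of skilled adults
    SUPERHUMAN = 5   # surpasses skilled adults

@dataclass
class SystemRating:
    name: str
    level: Level
    general: bool    # True = general AGI, False = specialized AI

# Example systems mentioned in the article.
ratings = [
    SystemRating("ChatGPT", Level.EMERGING, general=True),
    SystemRating("Bard", Level.EMERGING, general=True),
    SystemRating("Llama 2", Level.EMERGING, general=True),
    SystemRating("AlphaFold", Level.SUPERHUMAN, general=False),
]

for r in ratings:
    scope = "general" if r.general else "specialized"
    print(f"{r.name}: level {int(r.level)} ({r.level.name}), {scope}")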
Google DeepMind also listed the criteria underlying these standards, including functionality, generality, performance, potential, metacognition, ecological validity, and directionality.
AGI should be judged by its capabilities, not by the processes behind them, and it must have the ability to learn new tasks, the team added. Even an unfinished system should be assessed on its potential, and it should be able to realize the values that humans prioritize.
The concept and terminology of AGI have been around for a long time, but this is the first time they have been systematically organized.
There are growing calls in the industry to prepare safety guidelines in advance of AGI’s arrival.
AGI systems with capabilities beyond human comprehension, such as Google’s AlphaZero or AlphaFold, could be hard for humans to see through, even when used for deepfakes, deception, and manipulation.
There is also the need to prepare for the possibility of AGI becoming uncontrollable if it surpasses human abilities.
An example is OpenAI’s Superalignment project, which aims to align AI goals with human values to ensure that AGI, when it does emerge, does not harm humans.
Traditional safety guidelines assume that the tasks performed by AI systems can be evaluated and checked by humans with superior capabilities. But the emergence of an AGI that surpasses humans could make such safety checks impossible, industry insiders noted.