KAIST unveils AI technology that mimics learning by human brain

By Lee Jae-lim, Oct. 23, 2024, 18:30

KAIST has unveiled AI technology that pretrains artificial neural networks in a way that resembles the learning process of a biological brain, improving both efficiency and accuracy.
An explanation presented by KAIST of its new AI technology, which enables a faster and more precise machine learning process. [KAIST]

A research team at KAIST has developed AI technology that mimics the brain’s learning mechanisms, enabling faster and more precise machine learning processes to advance AI.

Led by Prof. Paik Se-bum, the university team successfully pretrained artificial neural networks using random data, allowing the networks to learn more efficiently and accurately when exposed to actual data, KAIST said on Wednesday.

The team focused on the fact that a biological brain engages in spontaneous neural activity to initiate learning even before any sensory experience.

This process, a form of meta-learning, or “learning how to learn,” was applied in the team’s research to demonstrate that random pretraining can naturally align forward and backward neural connections, enabling error backpropagation without weight transport.

Modern AI is rooted in the error backpropagation learning method introduced by computer scientist Geoffrey Hinton, this year’s recipient of the Nobel Prize in Physics. Backpropagation improves a neural network by adjusting its weights, the tunable parameters that determine the strength of the connections between nodes: errors are calculated at the output and propagated backward layer by layer, reusing the forward connection weights in reverse, a step known as weight transport.
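The mechanism can be sketched in a few lines of NumPy. This is a minimal, hypothetical two-layer example (the dimensions, data and learning rate are illustrative, not from the research): the same matrix W2 used on the forward pass is reused, transposed, to carry the error backward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network (hypothetical sizes, for illustration only).
W1 = rng.normal(scale=0.1, size=(4, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 2))   # hidden -> output weights

def relu(z):
    return np.maximum(z, 0.0)

x = rng.normal(size=(1, 4))               # one input sample
target = np.array([[1.0, 0.0]])           # its desired output

# Forward pass.
h = relu(x @ W1)
y = h @ W2

# Backward pass: the output error reaches the hidden layer through
# W2.T, the very same weights used on the forward pass. This reuse of
# forward weights in reverse is the "weight transport" step.
err_out = y - target                      # gradient of squared error w.r.t. y
err_hidden = (err_out @ W2.T) * (h > 0)   # error transported via W2.T

# Weights are then updated layer by layer to reduce the error.
lr = 0.05
W2 -= lr * h.T @ err_out
W1 -= lr * x.T @ err_hidden
```

It is this dependence on `W2.T` in the backward pass that has no direct biological counterpart.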

While groundbreaking for AI, this system differs from biological neurons, which cannot reverse signals in the same way.

KAIST’s approach, however, pretrains the neural network to self-adjust without relying on reversing these connections.

In 2016, a joint research team from Oxford University and DeepMind first introduced the concept of error backpropagation learning without weight transport. Their method, however, was limited by slow speed and low accuracy, making it impractical for real-world application.
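That 2016 technique is known as feedback alignment: errors travel backward through a fixed random matrix rather than the transported forward weights, and learning still works because the forward weights gradually come into agreement with the random feedback. The sketch below illustrates the idea only, with toy dimensions, stand-in random data and an assumed regression task, not the original experiment or the KAIST procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-layer network trained by feedback alignment (toy sizes).
W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 2))
B = rng.normal(scale=0.1, size=(2, 8))    # fixed random feedback matrix

def relu(z):
    return np.maximum(z, 0.0)

lr, losses = 0.05, []
for _ in range(200):
    x = rng.normal(size=(16, 4))          # stand-in minibatch
    target = x[:, :2]                     # toy regression target
    h = relu(x @ W1)
    y = h @ W2
    err_out = y - target
    losses.append(float(np.mean(err_out ** 2)))
    # No weight transport: the error reaches the hidden layer through
    # the fixed random matrix B, never through W2.T.
    err_hidden = (err_out @ B) * (h > 0)
    W2 -= lr * h.T @ err_out / len(x)
    W1 -= lr * x.T @ err_hidden / len(x)

# A cosine-style measure of how far W2.T has drifted toward B, the
# alignment that lets B stand in for the transported weights.
alignment = np.sum(W2.T * B) / (np.linalg.norm(W2) * np.linalg.norm(B))
```

KAIST's contribution, per the article, is that pretraining on random data can produce this forward–backward alignment up front, addressing the speed and accuracy limits of the original method.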

"This research breaks the traditional stereotype that only learning based on data is important in machine learning, introducing a new perspective rooted in neuroscience,” Paik said in a statement. “By addressing the weight transport problem, we have not only solved a key issue in artificial neural network training but also provided insights into how the brain learns.”

The team’s findings will be presented at the 38th Annual Conference on Neural Information Processing Systems (NeurIPS) in Vancouver, Canada, in December.

BY LEE JAE-LIM [lee.jaelim@joongang.co.kr]

Copyright © Korea JoongAng Daily. Unauthorized reproduction and redistribution prohibited.
