[Column] Keeping AI from morphing into a Frankenstein

2023. 4. 13. 20:15

Hasok Chang

The author is a professor of history and philosophy of science at the University of Cambridge.

In my earlier column for the JoongAng Ilbo, I wrote that the development and use of chatbots should be regulated. I am not well-versed in the field of artificial intelligence (AI), but many experts share a similar view. More than 1,000 people knowledgeable about AI signed an open letter from the Future of Life Institute, a nonprofit backed by Tesla and SpaceX founder Elon Musk. They argue that the development of AI systems beyond the level of the recently released GPT-4 should be halted immediately for at least six months, and that the social and ethical implications of the new technology should be carefully reviewed during that time.

The statement drew attention partly because of Elon Musk. He is a controversial figure who recently purchased Twitter for $44 billion and who, so far, has hardly been one to refrain from developing new technology and pushing it to the next level. Musk, in his early 50s and the second richest man in the world, has made a fortune from various businesses targeting wealthy customers fascinated by new technologies. He also took the lead in private space development, claiming that humankind could move to Mars if the many problems on Earth cannot be resolved. He co-founded OpenAI, the company that created ChatGPT, and Tesla automobiles already offer some self-driving functions powered by AI.

Why on earth would such a person propose stopping AI development? There are some cynical interpretations. First, Musk is criticized as a hypocrite: he recently protested against regulation after Tesla vehicles were recalled over problems with their self-driving functions. His call to suspend AI development at this particular moment could also be a way to let the current leaders reap exclusive profits while blocking others from joining the research, much as nuclear powers try to prevent other countries from developing nuclear weapons.

Let's set Musk aside, since the other signatories presumably did not sign the open letter for personal gain. Weighing the merits of their argument on its own terms, two points stand out.

First, since AI development has already produced remarkable results in recent years, they argue we must think carefully about how to make the most of what we have without side effects. If you think about it, truly skilled artisans can achieve a great deal with relatively simple tools, because they fully understand how those tools can be used. Unskilled ones, in contrast, only covet the latest tools, and even when they buy new ones, they do not get much done.

The other point to note is that we need to thoroughly understand how AI works, an issue that experts in the field have begun to raise. The trouble with the recently developed systems is that even those who designed them do not understand exactly how the AI acquires such astonishing knowledge and learning skills.

To give a simple example, the creators of AlphaGo, which defeated Lee Se-dol, were not masters of Go; they could never win a game against Lee themselves. AlphaGo learned its amazing Go skills on its own, and the inventors merely built a framework within which the AI could learn. It is like a brilliant student surpassing a teacher who does not understand exactly how the student became so good. Something its own inventor does not understand cannot be called a simple tool, and a technology whose operating rules are incomprehensible can turn into an uncontrollable Frankenstein.

The Asilomar AI Principles are cited at the beginning of the open letter: since the development of AI can lead to fundamental changes in the history of life on Earth, it needs to be carefully planned and managed with appropriate resources. The principles were agreed upon in 2017 in Asilomar, California, by pioneers worried about the future of AI.

Asilomar was the very place where farsighted biologists, concerned about the impact of genetic engineering, gathered in 1975 and reached a consensus to voluntarily restrain recombinant DNA research. The AI experts must have chosen Asilomar as the site for declaring their principles as a reminder of the biologists' precedent.

While things did not work out exactly as the biologists hoped, their declaration raised awareness and helped keep genetic engineering from developing in uncontrollable directions. I hope the AI statement can serve as a similar seed.

Translation by the Korea JoongAng Daily staff.

Copyright © Korea JoongAng Daily. Unauthorized reproduction and redistribution prohibited.
