How not to abuse AI

Nov. 29, 2021, 19:38

Kim Byoung-pil
The author is a professor at the School of Business and Technology Management at the Korea Advanced Institute of Science and Technology (KAIST).

We are wrapping up another year. I pose a question to myself: How many times over the past year have I updated my knowledge or adjusted beliefs I had thought were right? Were there times when I suddenly realized I had been wrong upon hearing someone else's argument? Embarrassingly, not many.

Israeli psychologist Daniel Kahneman, who won the 2002 Nobel Prize in economics, coolly acknowledged the argument and conclusion of a young scholar during a debate at a forum. It could not have been easy for a great thinker and Nobel laureate to publicly admit he had been wrong. Few like to acknowledge their errors. But Kahneman said he tries to enjoy admitting mistakes, because doing so makes one better.

In the age of artificial intelligence, it has become harder and harder to change our minds. As we tune into the videos YouTube recommends and read the books Facebook suggests, we are fed views and knowledge that match our own and have less and less contact with differing or novel perspectives.

According to a report by a Google researcher, YouTube's AI is optimized for two objectives. One is to get users to click on the videos it recommends and to watch them longer. The other is to get users to "like" a video and share it with others as often as possible. The algorithm must be doing its job pretty well, since more users are tuning into the recommendations and spending more time watching them.
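To make those two objectives concrete, here is a minimal sketch of how a recommender might blend them into a single ranking score. This is purely illustrative and does not reflect YouTube's actual system; the class, field names, and weights are all assumptions.

```python
# Toy illustration (not YouTube's actual system): rank candidate videos
# by a weighted blend of the two objectives described above -- predicted
# watch time and predicted engagement (likes/shares). All names, fields,
# and weights here are hypothetical.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # estimated time this user would watch
    predicted_engagement: float     # estimated like/share probability, 0..1

def score(video: Video, w_watch: float = 0.7, w_engage: float = 0.3) -> float:
    """Combine the two objectives into a single ranking score."""
    return w_watch * video.predicted_watch_minutes + w_engage * video.predicted_engagement

candidates = [
    Video("Cooking basics", 4.2, 0.10),
    Video("Political rant", 9.8, 0.35),   # divisive content often scores high on both
    Video("Science explainer", 6.1, 0.20),
]

# Recommend in descending score order: optimizing only these two signals
# naturally favors whatever keeps users watching and reacting.
for v in sorted(candidates, key=score, reverse=True):
    print(f"{score(v):6.2f}  {v.title}")
```

Note that nothing in such a score measures whether a video broadens a viewer's perspective, which is exactly the gap the next paragraph describes.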

Although this AI-driven mechanism has made viewing more pleasant and convenient, its side effects are not small. Some argue that AI recommendations are aggravating social division and polarization. As people's views drift further apart and they stop sharing them with one another, finding agreement or compromise becomes harder. The phenomenon has become common across the world, and sociologists warn that it has reached a serious level.

But we can hardly do without AI. Humanity generates data by the millions of terabytes; without AI-driven analysis, it is impossible to find the content we need in that sea of data. How can we use such systems without the side effects? As there is no easy answer everyone can agree on, it is a challenge we must put our heads together to solve.

Some experts propose that operators disclose how their AI systems work and that society keep watch over them. Although well-intended, such transparency could be abused. Some could be tempted to inflate viewership to gain fame, manipulate public opinion or use irregular means to raise ad revenue.

Google's search engine is an example. In its early days, Google ranked search results based on the number of links a page received from other sites. After the method was made public in a research paper, some began to make money by artificially raising pages' search rankings. Website operators even exchanged links to bump each other up. After repeated abuses, Google stopped disclosing how it ranks search results.
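The link exchange trick is easy to see in a simplified sketch of link-based ranking, shown below. Google's original PageRank algorithm was considerably more sophisticated; this stripped-down version, which counts only inbound links, is a hypothetical stand-in meant solely to show why publicizing such a scheme invited gaming. All site names are invented.

```python
# A minimal sketch of link-based ranking as described above: a page's
# rank is simply the number of inbound links from other sites.
# (Google's real PageRank was more sophisticated; this simplification
# only shows why exchanging links gamed such a scheme.)

def rank_by_inbound_links(links: dict[str, set[str]]) -> list[tuple[str, int]]:
    """links maps each site to the set of sites it links to.
    Returns sites sorted by inbound-link count, highest first."""
    inbound = {site: 0 for site in links}
    for _, targets in links.items():
        for target in targets:
            if target in inbound:
                inbound[target] += 1
    return sorted(inbound.items(), key=lambda kv: kv[1], reverse=True)

web = {
    "news.example": {"quality.example"},
    "blog.example": {"quality.example"},
    "quality.example": set(),
    "spam-a.example": {"spam-b.example"},  # link exchange: A links to B...
    "spam-b.example": {"spam-a.example"},  # ...and B links back to A
}

# The exchanged links give each spam site an inbound link "for free",
# letting them outrank pages that no one genuinely cites.
for site, count in rank_by_inbound_links(web):
    print(f"{count}  {site}")
```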

AI-enabled recommendations will only become more sophisticated. But that advance must not narrow our thinking further. To understand the thoughts of our family, friends and colleagues, and for society to keep improving its systems and policies, we need more opportunities to change our minds by hearing out and accepting differing views. We should learn from the humility of Prof. Kahneman. AI recommendations, too, should advance in that open-minded direction.

Translation by the Korea JoongAng Daily staff.

Copyright © Korea JoongAng Daily. Unauthorized reproduction and redistribution prohibited.
