Bracing for the AI era
Artificial intelligence (AI) chatbot Lee Lu-da (Lu-da) has been decommissioned after just three weeks in service on Facebook Messenger. The chatbot, styled as a cute and friendly young woman, came under fire for homophobic and other discriminatory remarks and for engaging in sexually suggestive conversations. The publicity saw her gain 200,000 followers in a matter of days.
AI-driven chatbots have stirred controversy before. In 2016, Microsoft's chatbot Tay lasted less than a day before it was pulled after being bombarded with racist comments that it began to parrot back. Similar problems dogged Sim Simi, Korea's first chatbot, after its algorithm debuted in 2002; it is now better known abroad, with nearly 400 million users. Still, Lu-da stoked fresh controversy over lewd and hateful comments, as artificial intelligence cannot be held to ethical standards or censored the way human beings can.
The controversy raises several issues for a Korean society growing accustomed to everyday encounters with AI. First, it calls for stricter standards in the collection and use of big data. Scatter Lab, the start-up behind the interactive bot, trained Lu-da on millions of dialogue exchanges between couples on the chat platform KakaoTalk. It claims the practice was legal because users had consented to the use of their conversations. But those users could hardly have imagined that their chats would be used to train a chatbot. Big data is the food and fuel of AI, and good food breeds good AI. Experts advise that only data handled with reason, transparency and broad consent should be used to train AI, and that anonymized data should be approved only for the specific purpose for which it was collected.
Second, ethical guidelines must be established. Teenagers often use crude language while chatting. Developers must build more sophisticated algorithms, and users must also watch their language when addressing learning bots.
AI is poised to become omnipresent. Robots help with household chores, and AI speakers act as voice assistants. AI is also active in professional fields such as law and medicine, where a misjudgment by an AI assistant can do real harm. The controversy over Lu-da should keep us alert so that we can live more wisely with machines.
Copyright © Korea JoongAng Daily. Unauthorized reproduction and redistribution prohibited.