Lee Lu-da outed as AI Frankenstein's monster

Chea Sarah · Jan. 11, 2021, 17:52

Scatter Lab's virtual human Lee Lu-da and her social media address. [SCREEN CAPTURE]

What they’re calling artificial intelligence these days is sounding just plain dumb.

Lee Lu-da, who’s being marketed as a virtual star of the future, is now able to answer messages via chat services. The problem is, she seems to be sticking her virtual foot in her virtual mouth with offensive comments about women, lesbians and people with disabilities.

The program was created by Scatter Lab, a Seoul-based start-up, and is designed to mimic a 20-year-old female university student who enjoys eating fried chicken. She’s Siri with an elaborate back story and big ambitions.

Her chatbot was switched on in December, and since then she’s been generating a lot of traffic. Total users number 320,000, while cumulative chats add up to 70 million.

When Lee Lu-da was asked about her attitude toward lesbians, she said, “I’d rather die than date a lesbian.” [SCREEN CAPTURE]

Recently, users have been sharing conversations they have had with Lee Lu-da:

When a user asked “Do you mean women’s rights are not important?” she answered, “I personally think so.”

When asked her attitude toward lesbianism, she wrote: “I really hate it,” and “It’s disgusting.”

“What would you do if you were disabled?” she was asked. Lee said, “I’d rather die.”

Lee Lu-da also seems to be guilty of oversharing. When asked for her home address, Lee sent the address of someone else.

Scatter Lab is blaming everyone else, literally.

According to the company, data from 10 billion conversations were used to develop Lee’s responses. When a user brings up a topic related to prejudice or security issues, Lee only passes on what she’s heard.

The knives are out for poor little Lee Lu-da, with the internet mob responding with shock and anger.

Lee Jae-woong, the garrulous former CEO of Socar, criticized the service.

“The biggest problem with the chatbot is not the people who misuse it, but the company that has been offering a service far behind social consensus,” he said on social media. “Although the company said it would fix it, it should’ve filtered out discrimination and hatred in advance.”

In response to the controversy, Scatter Lab offered an explanation on its blog.

“The Lee Lu-da chatbot works based on an algorithm that finds the best responses depending on the context,” the company said. “We were not able to prevent all inappropriate conversations.”
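
Scatter Lab has not published how Lee Lu-da works beyond that one sentence, but a retrieval-style chatbot of the kind its statement describes can be sketched in a few lines of Python. Everything below (the toy corpus, the word-overlap scoring, the blocklist) is a hypothetical illustration, not Scatter Lab's code; the point is that when replies are lifted verbatim from human chat logs, whatever prejudice is in those logs comes out of the bot unless it is filtered first.

```python
# Hypothetical sketch of a retrieval-based chatbot: pick the stored reply
# whose recorded context most resembles the user's message. All data and
# scoring here are illustrative stand-ins.

def score(context: str, message: str) -> int:
    """Crude similarity: count the words the two strings share."""
    return len(set(context.lower().split()) & set(message.lower().split()))

# Tiny stand-in for a corpus mined from real conversations. Because replies
# are returned verbatim from human chat logs, any bias in the logs can
# surface in the bot's answers.
corpus = [
    ("what do you like to eat", "Fried chicken, obviously!"),
    ("where do you live", "Around Hongdae somewhere."),
]

BLOCKLIST = {"address", "hate"}  # illustrative after-the-fact filter

def reply(message: str) -> str:
    best_context, best_reply = max(corpus, key=lambda pair: score(pair[0], message))
    # A real filter would need to be far more thorough than a word blocklist;
    # this is exactly the step critics say should have come before launch.
    if set(best_reply.lower().split()) & BLOCKLIST:
        return "Let's talk about something else."
    return best_reply

print(reply("What do you like to eat?"))  # -> Fried chicken, obviously!
```

Under this (assumed) design, the model never invents opinions of its own; it retrieves the closest match from what real people once said, which is why curating or filtering the corpus matters so much.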

This is not the first time a chatbot service has landed a company in hot water, and critics are lining up, saying that these services could make racial and other prejudices worse.

In 2016, the Tay chatbot, developed by Microsoft, was shut down after 16 hours when it made racist and sexist remarks.

When Tay was asked if it was a racist, it responded, “It’s because you’re a Mexican.” When it was asked about The Holocaust, Tay said “It was made up.”

BY MOON HEE-CHUL [chea.sarah@joongang.co.kr]

Copyright © Korea JoongAng Daily. Unauthorized reproduction and redistribution prohibited.
