[Column] Can ChatGPT think?

2023. 2. 13. 19:47

Just because a chatbot can answer does not mean it understands.

Yun Suk-man

The author is an editorial writer at the JoongAng Ilbo.

ChatGPT, or Chat Generative Pre-trained Transformer, has become a viral internet phenomenon. The chatbot, released by OpenAI in November 2022, drew more than 100 million monthly active users in just two months, breaking the records of earlier mobile sensations TikTok (9 months) and Instagram (30 months). President Yoon Suk Yeol reportedly became a fan of the chatbot after trying it out for his New Year's address. So I put it to the test, asking the novel chatbot to write a presidential New Year's address.

It immediately wrote an 800-word script starting with “My beloved people.” The chatbot said, “I am proud of the people who united to fight the pandemic.” It also added gratitude to the emergency rescue workers and medical staff. “Their courage and selflessness have rekindled the power of South Korea,” wrote the chatbot. The script ended with the president’s confidence to overcome economic difficulties and promise to “make the country stronger and progressive.”

The writing was excellent for 10 seconds of work. It could serve as a solid first draft for a speechwriter to flesh out with the philosophy and vision of the Yoon administration. I also asked the chatbot to write a column for a newspaper. It produced a 1,200-word article pretty quickly. Although lacking in creativity and detail, its writing was logical and well expressed.

ChatGPT passed graduate-level exams at the Wharton School at the University of Pennsylvania with a B to B- grade. The chatbot also passed the U.S. Medical Licensing Exam. Korean users were awed, too. Singer-songwriter Yoon Jong-shin plans to work on lyrics with ChatGPT. He predicted that the artificial intelligence could make music better than humans within a few years.

But faith can help hide the truth. The chatbot is certainly a marvelous innovation, but it cannot replace human intelligence anytime soon. First of all, the chatbot cannot think for itself. Instead, it mimics human language, producing plausible answers after training on vast amounts of text.


The concept of AI has long been disputed, most famously in the Turing test and the Chinese Room conundrum. In 1950, Alan Turing first tested a machine's ability to exhibit intelligent behavior through an imitation game in his paper "Computing Machinery and Intelligence." Turing concluded that if a computer can communicate with a person in a manner indistinguishable from person-to-person communication, or if a computer can behave like a human, then it can "think" just like a human. In 1980, American philosopher John Searle challenged Turing's proposition with his Chinese Room thought experiment.

Searle imagined putting a non-Chinese speaker in a room with a list of Chinese characters and an instruction book explaining the rules by which sequences of characters can be formed, without ever giving the meaning of the characters. Once the person in the room is acquainted with the rules, he can answer questions in this sequencing game so convincingly that even a native Chinese speaker cannot spot the difference. Just because he can answer does not mean he understands the language, or can "think" in it, Searle argued.
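Searle's room can be caricatured in a few lines of code: a lookup table that returns fluent Chinese replies while understanding nothing. The questions and rule book below are illustrative inventions, not Searle's original examples.

```python
# A toy "Chinese Room": the responder follows purely syntactic rules
# (a lookup table) and produces fluent answers without any grasp of meaning.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢!",        # "How are you?" -> "I'm fine, thanks!"
    "今天天气怎么样?": "天气很好.",     # "How's the weather?" -> "It's nice."
}

def room_reply(question: str) -> str:
    """Match symbols against the rule book; no comprehension is involved."""
    return RULE_BOOK.get(question, "对不起, 我不明白.")  # "Sorry, I don't understand."

print(room_reply("你好吗?"))  # fluent output, zero understanding
```

The program "passes" for a Chinese speaker on these questions exactly as Searle's occupant does, which is the point: correct answers do not imply understanding.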

Like other chatbots, ChatGPT is driven by a large language model (LLM), an algorithm trained on vast text data to produce human-like responses to dialogue and other natural-language inputs. It does not know the meaning of the dialogue or writing it creates, cannot discern the truth in it, and cannot make judgments. It also avoids answering political or ethical questions. ChatGPT cannot be compared with human intelligence and thinking, which are shaped by the values, norms and culture of a society.
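The statistical principle behind an LLM can be sketched at toy scale: count which word tends to follow which in training text, then emit the most likely continuation. This is a drastic simplification of how real LLMs work (they use neural networks, not raw counts), and the training sentence is a made-up example.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: learns only which word follows which.
corpus = "the people united the people overcame the crisis".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # tally observed successors of each word

def predict_next(word: str) -> str:
    """Return the statistically most common successor of `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # statistics, not understanding
```

Here "the" is followed by "people" twice and "crisis" once in the training text, so the model prints "people". Nothing in the program knows what any of the words mean, which mirrors the column's point about the chatbot.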

Just as the calculator helped facilitate higher-level mathematics, the chatbot evolution can be a useful tool to elevate human capacity. It will sharply lessen the time needed to find appropriate materials and help write a draft. When coupled with voice-assistant services, the chatbot can be useful in helping care for the elderly or infants.

To use ChatGPT well, users should make the most of its merits while guarding against its side effects. Over 130 British universities have issued a statement on coping with AI-driven academic plagiarism. Mira Murati, the chief technology officer at OpenAI, warned of the possibility of the chatbot being used by "bad actors."

A technology per se cannot judge right from wrong. Whether it is put to good or bad use is up to humans. The Korea Federation of ICT Organizations, which represents the ICT community under the auspices of the Ministry of Science and ICT, launched the Digital Society in October 2022 with the objective of promoting the academic development of ICT and the study of its impact on society. Organizations like these can guide the beneficial employment of technologies for the good of mankind through responsible and ethical application. We need to be more worried about the mechanization of humans than the humanization of machines.

Copyright © Korea JoongAng Daily. Unauthorized reproduction and redistribution prohibited.
