[Column] High-performance AI, a double-edged sword
Yoo Chang-dong
The author is a professor of electrical engineering at KAIST.

ChatGPT — the artificial intelligence (AI)-powered language model developed by OpenAI — has become a global phenomenon. More than 25 million people around the world use it each day to summarize lengthy texts or even compose songs through human-like conversation. The recently released upgraded version — Generative Pre-trained Transformer 4, or GPT-4 — can accept not just text inputs but images as well.
The latest innovation has drawn complaints as well as praise. In an open letter published a week after GPT-4 was released last month, a group of AI experts, researchers, donors and industry figures called for a six-month "public and verifiable" pause on the training and development of AI systems like GPT-4 to assess their potential risks to society.
Among the more than 1,100 signatories were Elon Musk, who co-founded OpenAI, and Apple co-founder Steve Wozniak, along with a number of notable scientists, including cognitive scientist Gary Marcus, and engineers at Amazon, DeepMind, Google, Meta and Microsoft.
The letter comes amid a heated race among big tech companies and start-ups to advance powerful AI models for commercial purposes, a rush that has triggered concerns over unregulated — and potentially uncontrollable — new AI models. The call for a time-out is a warning against that rush, meant to buy time to weigh the benefits and potential dangers before advancing further.
The researchers wrote that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," urging governments to step in and institute a moratorium if their call is ignored. Other experts argued that the letter focuses too much on long-term risks, while racial and gender bias and other present-day issues demand more urgent attention.
Microsoft co-founder Bill Gates told Reuters that the pause in the development of AI won’t “solve the challenges” ahead. “I really don’t understand who they’re saying could stop, and would every country in the world agree to stop, and why to stop,” he said. “Clearly, there’s huge benefits to these things. What we need to do is identify the tricky areas.”
Generative models and other AI programs still have many limitations before they become commonplace. Key issues are “hallucinations” and “jailbreaks.”
They can hallucinate, fabricating wrong or irrelevant information to produce plausible-sounding answers. Asked which city on the Moon is a good place to live, for instance, an AI program might invent an answer that Lunar City, the capital of the Moon, is the best place to live.
Jailbreaking refers to techniques tech-savvy users employ to make AI programs produce outputs their developers blocked for ethical or security reasons. For instance, hackers can jailbreak AIs to crack the security protections of a computer program.
Hallucination and jailbreaking stem from limitations in the training datasets or flaws in the machine-learning process. The problems can arise as a large language model processes large volumes of text.
If AI systems are released for civilian or public use without addressing these problems, the side effects could be significant. If AIs are used to spread malicious rumors and slander against a person, the psychological and material damage could be severe.
Certain groups could train an AI program to make arguments in their favor and shun opposing views, harming healthy communication among members of society.
The problems arising from AI's limitations could have even graver consequences. AI can automatically create programs and control sensors and equipment connected to computers and the internet. An AI generating malicious programs to seize control of networks and equipment may no longer be a fictional scenario.
If malicious individuals or groups abuse AI to paralyze our communication and traffic systems, many lives could be endangered, and society may pay a dear price to repair the damage. Advancing AI models without clear ethical and safety standards can cause serious harm beyond the digital realm.
Super AI capabilities are a double-edged sword. They can contribute greatly to mankind's advance when used by people of good conscience, but they can bring about disaster when abused by people with malicious intentions. The spread of unregulated AI models can do more harm than good.
AI developers must thoroughly examine whether their models are truly ethical and safe before releasing and commercializing them for profit. Society must not blindly encourage AI development; instead, it must establish institutional mechanisms, including an education system, for people to use AI properly.
Translation by the Korea JoongAng Daily staff.
Copyright © Korea JoongAng Daily. Unauthorized reproduction and redistribution prohibited.