R&D spending at Naver, Kakao hits record high in H1 amid hyper-scale AI race

August 18, 2023, 10:06

[Image source: Gettyimagesbank]
South Korean platform giants Naver Corp. and Kakao Corp. have spent record amounts of money on research and development (R&D) in the first half of this year amid fierce competition among global tech companies for leadership in generative artificial intelligence (AI).

Naver plans to unveil its own large language model (LLM), HyperCLOVA X, on August 24, while Kakao plans to release KoGPT 2.0, an upgrade of its hyper-scale AI model, later this year.

According to data from the Financial Supervisory Service on Thursday, Naver's R&D spending amounted to 964.95 billion won ($721.5 million) in the first half of this year, up 15.2 percent from a year earlier, while Kakao's stood at 544.74 billion won, up 6.6 percent.

These are the highest first-half R&D outlays on record for both companies. The lion's share of the expenditure is known to have gone to AI development.

According to Naver’s half-year report, the company’s R&D expenses accounted for 20.6 percent of its total sales of 4.69 trillion won in the first half of the year, similar to the 21.5 percent recorded a year earlier. The report showed that a total of 157 R&D projects are underway at Naver.

Recently, Naver sold a 45.08 percent stake in its real estate fund for Pangyo Tech One Tower to GIC Private Ltd., Singapore's sovereign wealth fund, for 350 billion won in order to fund large-scale investment in its AI business.

Developing a generative AI foundation model requires huge amounts of data and computing power, and subsequently lightweighting and optimizing the model for commercial services can cost tens to hundreds of billions of won more.

At Kakao, R&D spending accounted for 14.4 percent of its total sales of 3.78 trillion won in the first half of the year. Its annual R&D spending is on pace to stay at a similar level for a second consecutive year after exceeding 1 trillion won for the first time last year, at 1.21 trillion won.

Currently, Kakao is testing LLMs of various sizes, including models with 6 billion, 13 billion, 25 billion, and 65 billion parameters, to find the optimal model that strikes the right balance between accuracy and cost-effectiveness.

“In the second half of the year, Kakao will release lightweight language models that can be operated more economically and combined into vertical services in various areas with speed,” said Hong Eun-taek, chief executive officer of Kakao.

Copyright © 매일경제 & mk.co.kr. Unauthorized reproduction, redistribution, and use for AI training are prohibited.
