LG's K-Exaone breaks into global top 10 AI rankings

Jo He-rim · Jan. 11, 2026, 14:24

Comparison of K-Exaone's performance against US and Chinese models based on 13 shared benchmark tests (LG AI Research)

LG AI Research unveiled K-Exaone, a homegrown AI foundation model that entered the global top 10 at seventh place -- marking the only Korean presence in a ranking dominated by US and Chinese models.

The company said Sunday its latest AI model delivered the strongest overall performance among five teams in a government-led AI foundation model competition, topping 10 of 13 benchmark tests with an average score of 72.

Globally, K-Exaone ranked seventh on the Intelligence Index compiled by Artificial Analysis, standing as the only Korean model to enter the global top 10 open-weight rankings dominated by Chinese and US developers. China has six models on the list, led by Z.AI's GLM-4.7 in first place, while the US has three.

Released as an open-weight model on Hugging Face, K-Exaone briefly climbed to second place on the platform's global model trend chart, reflecting strong interest from international developers.

The company said it will offer free API access to K-Exaone through Jan. 28, allowing developers and companies to use the model without cost during the initial rollout period.

The model was further recognized by Epoch AI, a US-based nonprofit, which added K-Exaone to its list of “Notable AI Models.” LG AI Research has now placed five models on the list -- the most among Korean companies -- starting with Exaone 3.5 in 2024, followed by Exaone Deep, Exaone Path 2.0 and Exaone 4.0.

“We established the development plan according to the time and infrastructure we were given, and we developed the first-phase K-Exaone using about half the data we have,” said Lee Jin-sik, head of Exaone Lab at LG AI Research.

LG K-Exaone ranks seventh in the Artificial Analysis Intelligence Index. (LG AI Research)

LG said the model marks the culmination of five years of in-house research and signals Korea’s entry into the global contest for frontier-class AI systems.

Rather than relying on scale alone, the institute said it redesigned the model’s architecture to boost performance while lowering training and operating costs.

K-Exaone adopts a mixture-of-experts (MoE) architecture with 236 billion total parameters, of which about 23 billion -- roughly 10 percent -- are activated per inference, enabling high performance with greater efficiency.
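The efficiency claim rests on sparse activation: a router sends each token to only a few experts, so most parameters sit idle on any given step. The following is a minimal, illustrative sketch of top-k expert routing, not LG's implementation; the expert count, top-k value, and linear "experts" are all assumptions chosen to mirror the roughly 10 percent activation ratio described above.

```python
# Illustrative sketch (not LG's architecture): top-k routing in a
# mixture-of-experts layer. Only the selected experts run per token,
# analogous to K-Exaone activating ~23B of its 236B parameters.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 10   # hypothetical expert count
TOP_K = 1          # experts activated per token -> 10% of expert capacity
DIM = 8            # toy hidden dimension

# Each "expert" here is a simple linear map; real experts are
# full feed-forward blocks inside each transformer layer.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))

def moe_layer(x):
    # The router scores every expert for this token...
    scores = x @ router
    # ...but only the TOP_K best-scoring experts actually execute.
    top = np.argsort(scores)[-TOP_K:]
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
out = moe_layer(token)
print(f"active expert fraction: {TOP_K / NUM_EXPERTS:.0%}")  # 10%
```

Compute cost per token scales with the experts that run, not the total parameter count, which is why a 236B-parameter model can serve at roughly the cost of a 23B dense one.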

The model’s core technology, hybrid attention, enhances its ability to focus on critical information during data processing while reducing memory requirements and computational load by 70 percent compared with previous models.

The tokenizer was also upgraded by expanding its training vocabulary to 150,000 words and optimizing frequently used word combinations, improving document processing capacity by 1.3 times. The adoption of multi-token prediction boosted inference speed by 150 percent, further improving overall efficiency, LG said.
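Multi-token prediction speeds decoding because one forward pass can yield several output tokens instead of one. The toy arithmetic below illustrates that effect with assumed numbers; the two-tokens-per-step figure is a hypothetical example, not LG's reported configuration.

```python
# Toy back-of-envelope (assumed numbers, not LG's figures): multi-token
# prediction amortizes each forward pass over several output tokens,
# cutting the number of decoding steps for the same output length.
def decoding_steps(num_tokens, tokens_per_step):
    # Ceiling division: each forward pass emits `tokens_per_step` tokens.
    return -(-num_tokens // tokens_per_step)

OUTPUT_LEN = 300
baseline = decoding_steps(OUTPUT_LEN, 1)  # classic next-token decoding
mtp = decoding_steps(OUTPUT_LEN, 2)       # 2 tokens predicted per pass

print(baseline, mtp, f"{baseline / mtp:.1f}x")  # 300 150 2.0x
```

A larger tokenizer vocabulary compounds this: if common Korean word combinations map to single tokens, the same document needs fewer tokens end to end, which is the mechanism behind the 1.3x document-processing figure.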

“K-Exaone is designed to maximize efficiency while reducing costs, allowing it to run on A100-class GPUs rather than requiring the most expensive infrastructure,” an LG AI Research official said.

“This makes frontier-level AI more accessible to companies with limited computing resources and helps broaden Korea’s AI ecosystem.”

Going beyond memorization, K-Exaone’s training focused on strengthening reasoning and problem-solving capabilities, the institute said.

During pretraining, the model was exposed to “thinking trajectory” data that emphasizes how problems are solved, not just the final answers. Posttraining incorporated proprietary reinforcement learning algorithms, including Agapo, which extracts learning signals from incorrect answers, and GrouPER, which refines outputs based on human preferences for natural language, LG explained.

Safety and compliance were also key priorities for the model. LG said it conducted data compliance reviews across all training datasets, excluding materials with potential copyright issues.

The company operates an internal AI ethics committee that evaluates risks across four categories: universal human values, social safety, Korea-specific considerations and future risks.

Under KGC-Safety, a Korea-specific safety benchmark developed by LG AI Research, K-Exaone scored an average of 97.38 across the four categories, outperforming OpenAI’s GPT-OSS-120B model (92.48) and Alibaba’s Qwen-3-235B model (66.16).

Copyright © The Korea Herald. Unauthorized reproduction and redistribution prohibited.