LG's K-Exaone breaks into global top 10 AI rankings

LG AI Research unveiled K-Exaone, a homegrown AI foundation model that entered the global top 10 in seventh place -- marking the only Korean presence in a ranking dominated by US and Chinese models.
The company said Sunday its latest AI model delivered the strongest overall performance among five teams in a government-led AI foundation model competition, topping 10 of 13 benchmark tests with an average score of 72.
Globally, K-Exaone ranked seventh on the Intelligence Index compiled by Artificial Analysis, standing as the only Korean model to enter the global top 10 open-weight rankings dominated by Chinese and US developers. China has six models on the list, led by Z.AI's GLM-4.7 model in first place, while the US has three.
Released as an open-weight model on Hugging Face, K-Exaone briefly climbed to second place on the platform’s global model trend chart, reflecting strong international interest.
The company said it will offer free API access to K-Exaone through Jan. 28, allowing developers and companies to use the model without cost during the initial rollout period.
The model was further recognized by Epoch AI, a US-based nonprofit, which added K-Exaone to its list of “Notable AI Models.” LG AI Research has now placed five models on the list -- the most among Korean companies -- starting with Exaone 3.5 in 2024, followed by Exaone Deep, Exaone Path 2.0 and Exaone 4.0.
“We established the development plan according to the time and infrastructure we were given, and we developed the first-phase K-Exaone using about half the data we have,” said Lee Jin-sik, head of Exaone Lab at LG AI Research.

LG said the model marks the culmination of five years of in-house research and signals Korea’s entry into the global contest for frontier-class AI systems.
Rather than relying on scale alone, the institute said it redesigned the model’s architecture to boost performance while lowering training and operating costs.
K-Exaone adopts a mixture-of-experts (MoE) architecture with 236 billion total parameters, of which about 23 billion -- roughly 10 percent -- are activated per inference, enabling high performance with greater efficiency.
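A rough sketch of how such routing works: a router scores every expert block per token and activates only the top-k, so per-token compute tracks active parameters rather than the total. The expert counts below are illustrative assumptions; only the 236 billion and 23 billion figures come from the article, and this is not K-Exaone's actual router.

```python
import random

NUM_EXPERTS, TOP_K = 16, 2  # illustrative sizes, not K-Exaone's config

def route_token():
    """Score every expert for this token and keep the top-k."""
    scores = [(random.random(), i) for i in range(NUM_EXPERTS)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:TOP_K]]

active = route_token()  # only 2 of 16 expert blocks run for this token
# The article's figures imply a similar ratio at much larger scale:
print(f"K-Exaone active share: {23 / 236:.0%}")  # roughly 10% of 236B
```

Because the inactive experts never execute, serving cost scales with the roughly 10 percent of weights that fire per inference, not the full parameter count.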
The model’s core technology, hybrid attention, enhances its ability to focus on critical information during data processing while reducing memory requirements and computational load by 70 percent compared with previous models.
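The article does not detail the hybrid design, but attention hybrids typically mix full-attention layers (whose key-value cache grows with context length) with windowed layers (whose cache is capped). A back-of-envelope comparison, with all layer counts and sizes as labeled assumptions rather than K-Exaone's real configuration:

```python
# Hypothetical KV-cache comparison: full attention caches keys/values for
# every past token; windowed (local) attention caps the cache at a fixed
# window. All numbers below are illustrative, not K-Exaone's design.

def kv_cache_tokens(seq_len, full_layers, local_layers, window):
    full = full_layers * seq_len                 # grows with context length
    local = local_layers * min(seq_len, window)  # capped at the window size
    return full + local

all_full = kv_cache_tokens(32_000, 48, 0, 4_096)   # every layer is full attention
hybrid = kv_cache_tokens(32_000, 12, 36, 4_096)    # most layers use a local window
print(f"cache reduction: {1 - hybrid / all_full:.0%}")
```

Under these assumed numbers the hybrid layout cuts cached entries by roughly two-thirds, which illustrates how such a design can produce the large memory savings the article cites.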
The tokenizer was also upgraded, with its training vocabulary expanded to 150,000 words and frequently used word combinations optimized, lifting document processing capacity 1.3-fold. The adoption of multi-token prediction boosted inference speed by 150 percent, further improving overall efficiency, LG said.
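The speedup from multi-token prediction comes from drafting several tokens per forward pass instead of one, cutting the number of sequential passes. A toy decode loop showing the mechanism, with a stub model and k=2 as illustrative assumptions (not K-Exaone's actual decoder):

```python
# Toy multi-token prediction (MTP) decode loop: each forward pass drafts
# up to k tokens, so fewer sequential passes cover the same output length.
# The stub model and k=2 are illustrative assumptions.

def generate(model_step, prompt, max_new, k):
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        out.extend(model_step(out)[:k])  # accept up to k drafted tokens
    return out[:len(prompt) + max_new]

calls = []
def stub_step(ctx):
    calls.append(1)   # count forward passes
    return [0, 0]     # pretend the model drafted 2 tokens this pass

generate(stub_step, [1], 10, 2)
print(f"{10 / len(calls):.1f}x fewer passes")  # 2.0x with k=2
```

With two tokens per pass the loop halves the number of forward passes; real MTP gains depend on how often the drafted tokens are usable.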
“K-Exaone is designed to maximize efficiency while reducing costs, allowing it to run on A100-class GPUs rather than requiring the most expensive infrastructure,” an LG AI Research official said.
“This makes frontier-level AI more accessible to companies with limited computing resources and helps broaden Korea’s AI ecosystem.”
Going beyond memorization, K-Exaone’s training focused on strengthening reasoning and problem-solving capabilities, the institute said.
During pretraining, the model was exposed to “thinking trajectory” data that emphasizes how problems are solved, not just the final answers. Posttraining incorporated proprietary reinforcement learning algorithms, including Agapo, which extracts learning signals from incorrect answers, and GrouPER, which refines outputs based on human preferences for natural language, LG explained.
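Agapo and GrouPER are proprietary and undocumented here, but the idea of extracting a learning signal from incorrect answers can be sketched with a generic group-relative advantage scheme: each sampled answer is scored against the group mean, so wrong answers produce a negative signal that pushes the policy away from them. This is a common RL post-training pattern, not a reproduction of LG's algorithms.

```python
# Generic sketch: reinforcement-learning signal from incorrect answers.
# Each sampled answer to a prompt gets a reward; advantages are computed
# relative to the group mean, so low-reward (wrong) answers contribute a
# negative update signal. This does NOT reproduce Agapo or GrouPER.

def advantages(rewards):
    """Group-relative advantages for one prompt's sampled answers."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

# Four sampled answers to one prompt: 1 = correct, 0 = incorrect.
print(advantages([1, 0, 0, 1]))  # [0.5, -0.5, -0.5, 0.5]
```

The two incorrect answers receive negative advantages, so even failed attempts shape the gradient rather than being discarded.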
Safety and compliance were also key priorities for the model. LG said it conducted data compliance reviews across all training datasets, excluding materials with potential copyright issues.
The company operates an internal AI ethics committee that evaluates risks across four categories: universal human values, social safety, Korea-specific considerations and future risks.
Under KGC-Safety, a Korea-specific safety benchmark developed by LG AI Research, K-Exaone scored an average of 97.38 across the four categories, outperforming OpenAI’s GPT-OSS-120B model (92.48) and Alibaba’s Qwen-3-235B model (66.16).
Copyright © The Korea Herald. Unauthorized reproduction and redistribution prohibited.