AI acceleration: SK hynix fast-tracking HBM4 at Nvidia's request
SK hynix is moving up the initial delivery schedule for its high bandwidth memory 4 (HBM4) chips by six months at Nvidia’s request, and will also start producing its 16-layer HBM3E chips with the largest capacity to date at 48 gigabytes early next year, asserting its dominance in the AI accelerator market.
With the announcement, SK hynix shares surged 6.48 percent to close at 194,000 won ($141.25) on Monday.
The push came from none other than Nvidia CEO Jensen Huang, according to SK Chairman Chey Tae-won during his keynote speech on Monday for the SK AI Summit 2024 held at southern Seoul’s Coex.
“So I agreed to try and shorten the timeline by six months,” Chey said.
“Now I’m a bit nervous to meet Huang again,” he said half-jokingly. “We’re worried he might ask us to speed it up even further.” SK hynix is the main supplier of HBM to Nvidia, staying ahead of crosstown rival Samsung Electronics, which is still waiting for much-needed approval to supply its fifth-generation HBM3E chips to the AI chip goliath. SK hynix, on the other hand, began supplying these chips to Nvidia from March.
When pressed about his view of SK overtaking Samsung in the chip market in a post-speech interview, Chey remained cautious, commenting that a direct comparison wouldn’t be proper.
“Integrating AI into the chip industry requires diverse approaches and solutions,” Chey said. “Samsung has more resources and technologies than we do, and I am confident that it will also achieve great results in the AI wave.”
Whether the HBM4 delivery can actually be expedited will depend on next year’s progress, Chey said.
"The request [from Huang] was about whether we could see deliver samples sooner, and we expressed that if the customer demands it, we would try. […] Advancing technology doesn’t happen because we decide to, it has to meet all the standards and requirements for mass production.”
“The market for 16-layer chips is expected to open up from HBM4 models,” said SK hynix CEO Kwak Noh-jung at the session. “Anticipating this trend, SK hynix is developing 48 gigabyte 16-layer HBM3E chips to secure technological stability, and will begin supplying samples to our clients from early next year.”
It is the first time SK hynix has officially confirmed plans to release a 16-layer version.
HBM is a stack of dynamic random access memory (DRAM) chips designed for faster data processing, and it has been in the spotlight amid the AI boom as a key component of AI accelerators.
The 16-layer HBM3E showed performance improvements of 18 percent in training and 32 percent in inference compared to the preceding 12-layer products.
The chipmaker reported record-high quarterly profit in its third quarter earnings last month, driven by its competitive edge in AI memory chips. It celebrated a turnaround by logging 7 trillion won in operating profit compared to a 1.8 trillion won loss from a year ago, as well as its highest-ever quarterly revenue of 17.6 trillion won.
The company aims to deliver sixth-generation 12-layer HBM4 chips next year, and a 16-layer version by 2026.
The Nvidia chief appeared in an interview clip prepared by SK and emphasized the need for further progress in HBM to push AI development forward.
“The road map of HBM memory is excellent but frankly, I wish we got more bandwidth with lower energy,” Huang said. “So the road map that SK hynix is on is super aggressive and is super necessary.”
SK is also investing heavily in building physical AI infrastructure, starting with the opening of an AI data center test bed in Pangyo, Gyeonggi, next month. The facility will operate on Nvidia’s latest chips and SK hynix’s HBM, as well as the latest liquid cooling solutions and technologies for energy optimization.
Its telecom affiliate, SK Telecom, will transform its existing Gasan data center into an AI data center with a power density of 44 kilowatts per rack for stable GPU operation, and operate its “GPU as a service” offering, or GPUaaS, which enables enterprises to utilize GPUs in a cloud environment without directly purchasing the chips needed for AI service development.
The service is part of SKT’s partnership with U.S.-based GPU cloud service provider Lambda, and the Gasan center will operate on Nvidia’s H100 chips next month, with an aim to bring in the latest H200 chips in a first for Korea next March.
It will also invest 100 billion won to establish a large-scale neural processing unit (NPU) farm integrating Rebellions’ NPUs, SK hynix’s HBM and various AI data center solutions from SKT to ultimately build an independent AI ecosystem in collaboration with the government, major companies and cloud providers.
BY LEE JAE-LIM [lee.jaelim@joongang.co.kr]
Copyright © Korea JoongAng Daily. Unauthorized reproduction and redistribution prohibited.