SK hynix unveils HBM3E, supplies sample to Nvidia

Aug. 21, 2023, 15:33

World's 2nd-largest memory chipmaker seeks to cement leadership in burgeoning HBM market
SK hynix HBM3E (SK hynix)

South Korean chipmaker SK hynix announced Monday that it has successfully developed HBM3E, the next-generation, top-performing DRAM chip for artificial intelligence applications. The chipmaker also said a customer is evaluating a sample of the new product.

High Bandwidth Memory is a high-value, high-performance memory that vertically stacks and interconnects multiple DRAM chips, enabling a dramatic increase in data processing speed compared with earlier DRAM products. HBM3E is the extended version of HBM3 and the fifth generation of its kind, succeeding HBM, HBM2, HBM2E and HBM3.

The world’s second-largest memory chipmaker said the development of HBM3E builds on its experience as the industry’s sole mass producer of HBM3.

“The company, through the development of HBM3E, has strengthened its market leadership by further enhancing the completeness of the HBM product lineup, which is in the spotlight amid the development of AI technology,” said Ryu Sung-soo, head of DRAM product planning at SK hynix.

“By increasing the supply share of high-value HBM products, SK hynix will also seek a fast business turnaround.”

The chipmaker said it plans to begin mass production of HBM3E in the first half of next year to cement its unrivaled leadership in the AI memory market. Nvidia, the prominent GPU maker, is currently evaluating a sample of SK hynix's new product.

“We have a long history of working with SK hynix on High Bandwidth Memory for leading edge accelerated computing solutions,” said Ian Buck, vice president of hyperscale and HPC computing at Nvidia.

“We look forward to continuing our collaboration with HBM3E to deliver the next generation of AI computing.”

According to the company, the latest product not only meets the industry’s highest standard for speed -- the key specification for AI memory products -- but also leads in all other categories, including capacity, heat dissipation and user-friendliness.

SK hynix said HBM3E can process up to 1.15 terabytes of data a second, equivalent to processing more than 230 full-HD movies of 5 gigabytes each in a single second.
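That figure checks out with simple back-of-the-envelope arithmetic, assuming the decimal convention of 1 terabyte = 1,000 gigabytes:

\[
\frac{1.15\ \text{TB/s}}{5\ \text{GB per movie}} = \frac{1{,}150\ \text{GB/s}}{5\ \text{GB per movie}} = 230\ \text{movies per second}
\]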

The product also delivers a 10 percent improvement in heat dissipation by adopting the company's cutting-edge Advanced Mass Reflow Molded Underfill, or MR-MUF, technology, the company said. It also provides backward compatibility, allowing the latest product to be adopted even in systems designed for HBM3 without design or structural modifications, SK hynix added.

While HBM3 is the most advanced version of the chip currently on the market, the HBM3E memory to be used in Nvidia’s newest AI chip is 50 percent faster, according to Nvidia. HBM3E delivers a total of 10 terabytes per second of combined bandwidth, allowing the new platform to run models 3.5 times larger than the previous version while improving performance with three times faster memory bandwidth, the company said.

SK hynix has been considered the front-runner in the HBM market, taking nearly 50 percent of the market as of 2022, according to data released by TrendForce. For this year, the market tracker expects Samsung Electronics and SK hynix to each take 46 to 49 percent of the market, with Micron Technology taking 4 to 6 percent.

By Jo He-rim (herim@heraldcorp.com)

Copyright © The Korea Herald. Unauthorized reproduction and redistribution prohibited.
