Samsung, SK paths diverge on high-bandwidth flash

As the artificial intelligence market pivots toward inference, Samsung Electronics and SK hynix are both moving to develop high-bandwidth flash (HBF), billed as an extension of high-bandwidth memory (HBM), though their strategies in this next phase of the AI memory race diverge.
With HBM now a key component of AI chips, the two Korean memory makers are seeking an early foothold in the next-generation memory cycle.
“The NAND market has faced a prolonged slowdown driven by weak demand and falling prices, but HBF could open up a new demand base in AI data centers once it reaches commercialization,” said an industry source who requested anonymity.
According to industry sources on Wednesday, Samsung Electronics and SK hynix are taking different approaches to HBF. SK hynix is maintaining an HBM-centered strategy while positioning NAND-based HBF as a complementary solution. Samsung, by contrast, is seen as redefining the role of the technology within a broader restructuring of AI memory and storage architecture.
HBM is designed for ultra-fast computation. HBF, on the other hand, acts as a supporting layer for large-scale data storage and efficient transfer. It delivers about 80 to 90 percent of HBM’s speed, but with 8 to 16 times the capacity and roughly 40 percent less power consumption. For large AI training and inference servers, it is widely described as a “mid-tier memory” that can relieve HBM bottlenecks.
Its key advantage is capacity. While HBM prioritizes processing speed, HBF can scale to as much as 10 times the capacity at lower cost, making it a candidate to ease both the price and capacity constraints tied to HBM.
Structurally, HBF is built by stacking multiple layers of NAND flash, similar to how HBM is built from stacked DRAM. First-generation products are expected to stack 16 layers of 32-gigabyte NAND flash, offering around 512GB in total.
SK hynix said during an earnings call last week that it is developing HBF as an extension of HBM. The chipmaker is aiming to begin mass production of HBF next year and is also working with SanDisk on the development of next-generation NAND-based memory and related international standardization efforts.
At the center of its efforts is AIN B, a bandwidth-enhanced design using stacked NAND. SK hynix is exploring setups where HBF is paired with HBM to help offset capacity limitations in AI inference systems. It is also expanding its ecosystem through collaboration with global tech firms and participation in Open Compute Project events.
Samsung Electronics, meanwhile, is emphasizing a broader overhaul of AI memory and storage architecture. At a recent global storage event, it outlined AI infrastructure needs — performance, capacity, thermal management and security — while introducing a unified tier that integrates memory and storage.
Leveraging its foundry division’s logic design and process expertise, Samsung is also reviewing ways to improve control performance and power efficiency in next-generation NAND-based solutions.
Industry observers view Samsung’s approach as an effort to reshape the next-generation memory and storage landscape — HBF included — around AI inference environments.
“As AI demand continues to grow, the center of gravity in the memory market is rapidly shifting away from conventional DRAM and NAND toward high-bandwidth products,” said another industry source who requested anonymity.
“Competition for leadership is likely to intensify over the next two to three years, particularly around technologies such as HBF and the sixth-generation HBM4, which are set to become key components of future data center infrastructure.”
According to securities firms, the HBF market is projected to grow from $1 billion in 2027 to $12 billion by 2030. With its ability to boost bandwidth while scaling capacity, it is seen as a key technology to meet rising demand in AI data centers.
Still, considering that HBM — developed in 2015 — took seven to eight years to gain traction, HBF may also face a long ramp-up. Even so, analysts note that the NAND flash sector appears to be entering the early stages of a broader shift.
Kim Joung-ho, a professor at the School of Electrical Engineering at the Korea Advanced Institute of Science and Technology who pioneered the basic structure and concept of HBM, said at a briefing in Seoul on Wednesday that demand for HBF will surpass that of HBM starting in 2038.
“When the memory-centric computing architecture — in which CPUs, GPUs and memory are organically integrated on a single base die — is fully realized, the required volume of HBF will increase notably,” Kim said.
Industry observers expect HBF to be included in new platforms from Nvidia. Kim stressed that, following HBM, Korean memory makers such as Samsung and SK hynix must also secure leadership in HBF to maintain influence in the global AI market.
Copyright © The Korea Herald. Unauthorized reproduction and redistribution prohibited.