
SK Hynix

SK Hynix Mass-Produces 192GB SOCAMM 2, Optimized for Vera Rubin

Dong-A Ilbo | Updated 2026.04.21
75% more energy efficient than existing products
Rapidly emerging as a core pillar of AI memory
 
SK hynix announced on the 20th that it has begun mass production of its next-generation memory module “SOCAMM 2” in a 192GB (gigabyte) capacity, which will be mounted on NVIDIA’s latest artificial intelligence (AI) accelerator “Vera Rubin.” SK hynix explained that SOCAMM 2 was optimized for Vera Rubin from the design stage.

SOCAMM is a data-center memory module that integrates low-power (LP) double data rate (DDR) DRAM. LPDDR was originally a type of DRAM used in mobile products such as smartphones and tablet PCs, but it has recently emerged as a major AI memory thanks to its low power consumption characteristics. In the semiconductor industry, SOCAMM is regarded, together with high bandwidth memory (HBM), as one of the two main pillars of AI memory.

SOCAMM is drawing attention because AI performance criteria have expanded from the previous focus on “training” to now include “inference.” In the inference process, where AI filters and presents answers to actual users, the key is to reduce heat generation caused by excessive power usage and thereby eliminate “data bottlenecks.” The industry’s recent assessment is that, while HBM has overwhelmingly superior data input/output performance, it is not fully optimized for inference due to issues such as power efficiency and heat generation. In response, NVIDIA designed Vera Rubin so that SOCAMM 2 is placed next to the central processing unit (CPU) “Vera” to reduce power consumption, while HBM is placed next to the graphics processing unit (GPU) “Rubin” to handle high-speed computation.

SK hynix’s SOCAMM 2 uses LPDDR5X DRAM built on the sixth-generation (1c) process, with a circuit line width in the 10nm (nanometer; 1nm is one-billionth of a meter) class. The 1c node is currently the most advanced commercial memory process. SK hynix stated, “SOCAMM 2 is optimized for high-performance AI computation, offering more than twice the bandwidth and over 75% improved energy efficiency compared with existing server DRAM.”
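Taken together, the two relative claims above imply something about power draw: if bandwidth doubles while bandwidth-per-watt improves by 75%, total power at full throughput rises only modestly. The sketch below works through that arithmetic with normalized, hypothetical baseline values (not SK hynix specifications); only the 2x and 1.75x factors come from the article.

```python
# Illustrative arithmetic only. The 2.0x bandwidth and 1.75x energy-efficiency
# factors are the article's relative claims; the baseline values are
# normalized placeholders, not real server-DRAM specifications.

baseline_bandwidth = 1.0   # existing server DRAM throughput, normalized
baseline_efficiency = 1.0  # existing bandwidth per watt, normalized

socamm2_bandwidth = baseline_bandwidth * 2.0    # "more than twice the bandwidth"
socamm2_efficiency = baseline_efficiency * 1.75 # "over 75% improved energy efficiency"

# Power needed to sustain full bandwidth = bandwidth / (bandwidth per watt).
relative_power = socamm2_bandwidth / socamm2_efficiency

print(f"relative power at full bandwidth: {relative_power:.2f}x")
# i.e. roughly 14% more power for twice the throughput, under these assumptions
```

The point of the sketch is that doubling throughput does not double power: the efficiency gain absorbs most of the increase, which is why LPDDR-based modules are pitched at power-constrained inference workloads.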

Park Jong-min

AI-translated with ChatGPT. Provided as is; original Korean text prevails.