HBM2, an abbreviation of High Bandwidth Memory 2, is a high-performance memory interface and the successor to HBM. It is used in high-performance computing, servers, networking, graphics cards, and other bandwidth-intensive applications. The JEDEC (Joint Electron Device Engineering Council) Solid State Technology Association adopted HBM2 as an industry standard in January 2016.
Like HBM, HBM2 uses a 3D-stacked DRAM architecture with a wide interface, which provides high bandwidth at lower power consumption. HBM2 differentiates itself from HBM by specifying up to 8 dies per stack and doubling the per-pin transfer rate to 2 GT/s, while retaining the 1024-bit-wide interface.
According to analysis from SK Hynix, HBM2 reduces overall power consumption by more than 40% compared to GDDR5 DRAM and by 8% compared to HBM. Initially, HBM2 could reach up to 256 GB/s of memory bandwidth per stack. In 2018, however, JEDEC updated the HBM2 specification, allowing for 12 dies per stack and up to 307 GB/s of memory bandwidth per stack.
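The per-stack bandwidth figures follow directly from the interface width multiplied by the per-pin transfer rate. A minimal sketch of that arithmetic (the 2.4 GT/s pin rate used for the 2018 update is an assumption here, chosen because it is consistent with the quoted ~307 GB/s figure):

```python
def stack_bandwidth_gbps(interface_bits: int, rate_gtps: float) -> float:
    """Peak per-stack bandwidth in GB/s: interface width (bits) x pin rate (GT/s) / 8."""
    return interface_bits * rate_gtps / 8

# Original HBM2 spec: 1024-bit interface at 2 GT/s per pin.
print(stack_bandwidth_gbps(1024, 2.0))  # 256.0 GB/s

# 2018 spec update: same 1024-bit interface; the 2.4 GT/s pin rate
# is an assumed value that matches the ~307 GB/s per-stack figure.
print(stack_bandwidth_gbps(1024, 2.4))  # 307.2 GB/s
```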