Soon-to-Arrive HBM4 Memory for Superior Bandwidth Increases

Reading Time: 4 minutes


This is the time of year when people start thinking they just can't get enough sunshine, and it's natural to feel that way when the days get shorter and the weather turns darker and drearier. Nonetheless, sunshine is an incredible free resource that contributes to our overall well-being in so many ways, and there are people with seasonal affective disorder who actually become ill due to a lack of rays.

Now it may seem strange that we'd be talking about sunshine in a discussion of memory, bandwidth, and anything tied into the way we are collectively using digital technologies. But here's the connection: we can get by with shorter, darker days because we know spring will eventually come around again. When it comes to bandwidth, though, there's no such reprieve, and the way a lack of it slows down our movements online is compounded by the concept of induced demand.

So building more and wider freeways never solves traffic, but with bandwidth there are real, tangible benefits to expanding, improving, and optimizing memory capacities. What's new isn't new for very long here, but these advances are the kind of thing that will appeal to any good Canadian web hosting provider, and that's true for us here at 4GoodHosting too. For that reason the coming advent of HBM4 memory is definitely a blog-worthy topic, and that's where we're going for this week.


Fast Data Transfer Rates

High-bandwidth memory (HBM) has come a long way in the last 10 years, and the way it has supercharged data transfer rates is something we've all benefited from. Many more features have come along with it too, and it seems the best is yet to come for HBM memory, courtesy of the digitally savvy developers at Samsung.

The new HBM4 memory is expected to become available to consumers in standard products by next year (2024), and will feature a 2048-bit interface per stack, twice as wide as HBM3's 1024-bit interface. Technologies optimized for thermal performance, such as non-conductive film assembly and hybrid copper bonding, are also in development for this new memory.

This increase in interface width from 1024 bits per stack to 2048 bits per stack will likely be the biggest change HBM memory technology has ever seen. The 1024-bit interface has been the norm for HBM stacks since 2015, so over the course of 8 years many designers have grown accustomed to the limitations it imposes. The coming doubling of width will be a treat for them as they adjust to the new headroom.

Per-stack capacities of between 32GB and 64GB and peak bandwidth of 2 TB/s per stack or higher will make a major difference in operability right across the board. To be clear, though, building a 64GB stack requires a 16-Hi stack of 32Gb (4GB) memory devices, since 16 × 4GB = 64GB. Nothing close to that has been produced to date, so it looks like such dense stacks will only hit the market alongside the introduction of HBM4.
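The capacity arithmetic is simple enough to sketch; this is just an illustration of how die density and stack height multiply out, using the 16-Hi and 32Gb-per-die figures mentioned above:

```python
# Capacity of an HBM stack = dies per stack * density per die.
GBIT_PER_DIE = 32     # high-density 32Gb (4GB) DRAM die
DIES_PER_STACK = 16   # a 16-Hi stack

# Convert gigabits to gigabytes (8 bits per byte).
capacity_gb = DIES_PER_STACK * GBIT_PER_DIE / 8
print(capacity_gb)  # 64.0 -> a 64GB stack
```

Run the same math with today's more common 12-Hi stacks of 24Gb dies and you get 36GB, which is why 64GB per stack is still out of reach for now.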

Some TBD

This rosy outlook may need to be tempered somewhat, as we still don't know whether memory makers will be able to carry the ~9 GT/s data transfer rates supported by HBM3E over to HBM4 stacks with a 2048-bit interface and make them stick. But the belief is that, with some trial and error over the next 6 to 12 months, they will, and the doubled bus width will then double peak bandwidth from 1.15 TB/s per stack to 2.30 TB/s per stack.
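Peak bandwidth per stack is just the data rate multiplied by the bus width, so the doubling falls straight out of the arithmetic. A minimal sketch, assuming the ~9 GT/s HBM3E rate carries over unchanged:

```python
def peak_bandwidth_tbps(data_rate_gtps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in TB/s: transfers/s * bits per transfer, over 8 bits/byte."""
    return data_rate_gtps * bus_width_bits / 8 / 1000

print(peak_bandwidth_tbps(9.0, 1024))  # HBM3E, 1024-bit: 1.152 TB/s per stack
print(peak_bandwidth_tbps(9.0, 2048))  # HBM4,  2048-bit: 2.304 TB/s per stack
```

Since the data rate term is held constant, the 2x bus width is the entire source of the 1.15 to 2.30 TB/s jump.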

That's power, space, and flexibility in one tidy package, but we also need to look at how widening the per-stack memory interface will affect the number of stacks that processors and interposers can handle. Today's massive processors have to be taken into account when implementing new and superior memory technologies.

Nvidia's H100 is a good example: it supports six 1024-bit wide HBM3/HBM3E stacks, giving it a massive 6144-bit wide interface. If the interface of a single KGSD increases to 2048 bits, however, the question becomes whether processor developers keep using the same number of HBM4 stacks, or find ways of reducing the stack count while still maintaining high-performance standards.
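The trade-off in stack counts can be sketched numerically; the six-stack, 6144-bit H100 configuration is from the text, while the HBM4 options shown are hypothetical design choices:

```python
# Total memory interface width = number of stacks * per-stack bus width.
h100_width = 6 * 1024        # six 1024-bit HBM3/HBM3E stacks on the H100
print(h100_width)            # 6144-bit total interface

# Option A: match today's total width with fewer, wider HBM4 stacks.
stacks_needed = h100_width // 2048
print(stacks_needed)         # 3 stacks reach the same 6144-bit width

# Option B: keep six stacks and double the total interface width.
six_stack_hbm4 = 6 * 2048
print(six_stack_hbm4)        # 12288-bit total interface
```

Fewer stacks would save interposer area and routing complexity at the cost of total capacity, while keeping six stacks would double both width and the routing burden; that is precisely the open question for processor designers.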

All this said, HBM4 memory looks like it's going to be a fantastic addition, even though a few of the building blocks for its implementation have yet to be put in place.