
What’s this “HBM” that slashed Samsung Electronics’ operating profit in half?

💡 The hottest semiconductor topic these days: Samsung’s HBM struggles

The biggest buzz in the semiconductor world right now is Samsung’s weakening competitiveness in HBM. Things are so serious that Samsung Electronics Chairman Lee Jae-yong told his executives to face the crisis with a “do-or-die” mindset. Basically? It means nobody’s going home early.

Anyway, let’s break down what this HBM thing is—the one giving Samsung such a headache—so it’s easy to understand.


📌 First, you need to know what memory (RAM) is

Memory is the space where a computer or smartphone temporarily remembers what it needs to do right now.

For example, while you’re reading a webtoon, the episode you’re viewing is held in memory, so scrolling around or switching back to it is instant instead of reloading from scratch. It’s like a short-term scratchpad that stores what you immediately need.

Without memory, your device would have to use the hard disk for everything, which is so slow it’s basically unusable for anyone (especially in Korea).


🌟 HBM, explained really simply

HBM (High Bandwidth Memory) is advanced memory that moves data to and from a processor much faster than regular memory.

It’s an essential component for AI and high-performance computing to work smartly.

Thanks to TSV (Through-Silicon Via)—which directly connects stacked memory layers electrically—data can move up and down quickly between layers.

Sound confusing? Don’t worry, let’s make it even simpler.


✅ The difference between regular memory and HBM (with an example)

Regular memory struggles to keep up with AI and supercomputers.

Picture a big sports field with only one door to exit. No matter how many people there are, they have to leave one by one. It takes forever. That’s why traditional memory can’t keep up with AI workloads.

HBM is way faster and more efficient.

Now imagine the same field but with lots of doors. People can exit in multiple lines all at once. Suddenly it’s super quick.

That’s HBM’s principle. Instead of transferring data on a single flat layer like regular memory, HBM stacks multiple memory chips in 3D and connects the layers directly. This lets data move through many paths at once.
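The doors analogy above is really just parallelism, and you can see the effect with a toy calculation. This is a sketch with made-up numbers, not a real memory model: the function and its inputs are invented for illustration.

```python
import math

# Toy model of the stadium analogy: with one exit, people leave one at a
# time; with more exits, they leave in parallel and finish much sooner.
def exit_time(people: int, doors: int, seconds_per_person: float = 1.0) -> float:
    """Time for everyone to get out if the crowd splits evenly across the doors."""
    return math.ceil(people / doors) * seconds_per_person

print(exit_time(1024, 1))   # one door:      1024.0 seconds
print(exit_time(1024, 16))  # sixteen doors:   64.0 seconds
```

Same crowd, sixteen times the exits, one-sixteenth the wait. HBM’s stacked layers and many connections play the role of the extra doors.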


📌 TSV (Through-Silicon Via), made even easier

Old method = climbing stairs without an elevator:

Think of a building with only stairs. To get from the 1st to the 10th floor, you have to climb and wind around every floor. It takes time.

TSV = an elevator.

Now imagine there’s an elevator. Press a button on the 1st floor and go straight to the 10th.

TSV is like that elevator. It directly connects the stacked memory layers electrically, letting data travel quickly up and down.

SK Hynix is really good at this.


⚡️ HBM’s concrete advantages

HBM has much wider “data lanes” than regular memory.

It can transfer far more data at once. Stacking also packs more memory into the same footprint and moves each bit of data using less power, which means less heat and lower cost.
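How much wider are those lanes? A back-of-the-envelope sketch: peak bandwidth is roughly the bus width (in bytes) times the data rate per pin. The per-pin speeds below are typical published figures for these memory types, not exact specs of any particular product.

```python
# Rough peak bandwidth: (bus width in bits / 8) × data rate per pin (Gbps).
def peak_bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Approximate peak transfer rate of a memory interface, in GB/s."""
    return bus_width_bits / 8 * gbps_per_pin

ddr5 = peak_bandwidth_gb_s(64, 6.4)    # one standard 64-bit DDR5 channel
hbm3 = peak_bandwidth_gb_s(1024, 6.4)  # one HBM3 stack with its 1024-bit interface
print(ddr5)  # 51.2 GB/s
print(hbm3)  # 819.2 GB/s
```

At the same per-pin speed, the 1024-bit interface moves sixteen times the data of a regular 64-bit channel. That width is exactly the “many doors” from the earlier analogy.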

That’s why it helps AI do all those complex calculations faster—like talking naturally or generating images.

HBM is a critical part of supercomputers and high-end graphics cards.


💾 Why is HBM so important in the AI era?

For AI to talk, draw, or solve problems like humans, it needs to process massive amounts of data super fast.

Even with powerful CPUs or GPUs, if the memory is slow, they end up just waiting around for data. That’s called a bottleneck.

HBM breaks that bottleneck wide open.
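The bottleneck idea can be sketched with a naive timing model: if computation and data transfer overlap, the total time is set by whichever is slower. All the peak numbers below are illustrative, not real chip specs.

```python
# Naive bottleneck model: the chip finishes no sooner than the slower of
# its math units and the memory feeding them.
def workload_time(flops: float, data_bytes: float,
                  peak_flops: float, bandwidth_bytes_s: float) -> float:
    """Seconds to finish, limited by the slower of compute and memory."""
    return max(flops / peak_flops, data_bytes / bandwidth_bytes_s)

# Same hypothetical chip (100 TFLOP/s), same job, two memory systems:
slow_mem = workload_time(1e15, 1e13, 1e14, 5e10)  # ~50 GB/s memory
fast_mem = workload_time(1e15, 1e13, 1e14, 8e11)  # ~800 GB/s memory
print(slow_mem)  # 200.0 s: the chip mostly sits waiting for data
print(fast_mem)  # 12.5 s: the memory nearly keeps up with the math
```

With slow memory, a ten-second computation stretches to two hundred seconds of waiting; faster memory gets the total close to the compute time. That gap is the bottleneck HBM is built to remove.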

That’s why companies like NVIDIA, Google, and Amazon have to use HBM in their AI chips.


AI needs fast memory, and HBM delivers.

Think of it like a master craftsman with an assistant. If the assistant is right there handing over tools instantly, the work goes quickly. But if the assistant is slow or far away, the craftsman has to walk to the storage room (the hard disk) every time, wasting tons of time.

HBM is the smart, speedy assistant right at your side.


💾 The competition between Samsung Electronics, SK Hynix, and Micron

Right now, Samsung, SK Hynix, and Micron are fiercely battling in the HBM market.

Whoever can make and supply these chips faster and better wins the revenue and the market share.


AI-chip makers like NVIDIA, in particular, ask: “Can you meet our exact specs?” They’re extremely picky, since the memory they choose directly determines how well their AI chips perform.

Samsung fell behind, and that’s a big deal, because Samsung has long been the world’s top memory maker.


🎯 Why did Samsung run into trouble recently?

The problem is that Samsung’s latest HBM products, like HBM3 and HBM3E, were slow to pass the qualification tests that big customers run before placing orders.

They needed to ship quickly to big customers like NVIDIA, but delays pushed back supply.


Meanwhile, SK Hynix worked closely with NVIDIA and delivered stable, cutting-edge products. That boosted its sales while Samsung fell behind.


Samsung saw the AI boom coming and expanded factories and production. But because certification was late and inventory piled up, they spent a ton of money without selling enough in time.


As a result, their operating profit in Q2 2025 dropped by a whopping 56% year-on-year.

HBM is so important that this slump shook Samsung’s entire semiconductor business.


💰 Samsung’s next challenge: next-gen products

So is Samsung just giving up? No way.

They’re increasing R&D investment to develop even faster and more efficient next-gen HBM4 products.


That’s how they can win back big clients like NVIDIA and compete with SK Hynix to recover market share.


The semiconductor game is all about technological leadership.

SK Hynix is ahead for now, but the landscape can always shift.


Samsung’s current strategy is basically: “We’re late, but we’ll catch up.”

