SK Hynix Is Spending $7.9 Billion on Hardware. Google Is Trying an Algorithm. Both Are Responding to RAMmageddon.
Memory shortages constraining AI infrastructure builds are expected to continue until at least 2027. The industry has a name for it: RAMmageddon. Two very different responses are taking shape.
The Supply Side Bet
SK Hynix makes high-bandwidth memory (HBM), the chip type that powers Nvidia's AI accelerators. The company announced a $7.9 billion deal to acquire advanced extreme ultraviolet (EUV) lithography scanners from ASML, with delivery expected by 2027. EUV scanners etch chip patterns at nanometer scale. No scanners, no leading-edge HBM.
That deal sits inside a broader expansion push. SK Hynix is building new fabrication facilities in South Korea (~$25 billion) and Indiana (~$3.3 billion). The Indiana plant reflects ongoing pressure on chipmakers to establish US capacity.
The long-range number is larger still. SK Hynix plans to invest approximately $400 billion by 2050 in a semiconductor cluster in Yongin, South Korea. That is a multi-decade bet on sustained AI compute demand.
The Software Side Bet
Google introduced TurboQuant, described as an AI memory compression algorithm that lets AI systems use memory far more efficiently. Details on compression ratios and performance tradeoffs have not been disclosed. The direction is clear regardless: if raw HBM supply is constrained, reduce how much memory the models need.
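TurboQuant's internals are not public, so as a point of reference here is a generic weight-quantization sketch, the standard technique for shrinking AI memory footprints. Everything in it (symmetric absmax int8 quantization, the tensor shape) is an illustrative assumption, not Google's method:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric absmax quantization: map float32 values onto int8.

    One float scale is stored per tensor; memory drops ~4x
    (4 bytes per value -> 1 byte per value, plus the scale).
    NOTE: illustrative textbook technique, not TurboQuant itself.
    """
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values at inference time."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(1024, 1024)).astype(np.float32)
q, scale = quantize_int8(w)

print(w.nbytes // q.nbytes)  # -> 4 (4x memory reduction)
```

The tradeoff the article alludes to lives in the rounding step: each value is reconstructed only to within half a quantization step, which is the accuracy cost paid for the memory savings.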
The Timeline Problem
The hardware investments do not solve the near-term crunch. The ASML scanners arrive by 2027, the same year the shortage is projected to ease. SK Hynix is essentially building capacity that comes online when the existing gap is already expected to close.
TurboQuant, as software, presumably deploys faster. Whether memory compression can substitute for raw bandwidth in high-throughput inference at scale is a separate question worth watching.
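Back-of-envelope arithmetic shows what compression buys on the capacity side. The formula below is the standard transformer KV-cache size calculation; the model shape (80 layers, 8 KV heads, head dimension 128, roughly a Llama-3-70B-class model) and the int4 ratio are assumptions for illustration, not figures from Google or SK Hynix:

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_value: float) -> float:
    """Memory for a transformer KV cache: 2 tensors (K and V) per layer.

    Shapes here are illustrative assumptions, not any vendor's numbers.
    """
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

# One 128k-token sequence on an assumed 70B-class model shape:
fp16 = kv_cache_bytes(80, 8, 128, seq_len=128_000, bytes_per_value=2)
int4 = kv_cache_bytes(80, 8, 128, seq_len=128_000, bytes_per_value=0.5)

print(f"fp16: {fp16 / 2**30:.1f} GiB")  # -> fp16: 39.1 GiB
print(f"int4: {int4 / 2**30:.1f} GiB")  # -> int4: 9.8 GiB
```

A 4x smaller cache means roughly 4x more concurrent sequences per accelerator, which is the sense in which software can stand in for scarce HBM capacity. It does not, by itself, answer the bandwidth question the paragraph above raises.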
What the $400 Billion Number Actually Says
A commitment stretching to 2050 is not a response to a two-year shortage. SK Hynix is pricing in decades of AI infrastructure demand. The Yongin cluster investment suggests the company views HBM as a long-duration growth market, not a cyclical spike.
Whether AI compute demand actually sustains at that scale through 2050 is the question the $400 billion number cannot answer.
Source: TechCrunch